When MATLAB reports a "syntax error, unexpected parameters" message, it usually means the interpreter has hit an argument it cannot recognize or that violates MATLAB's syntax rules.
Common causes and fixes:
1. Spelling or syntax mistakes: check that function names, variable names and keywords are spelled correctly and follow MATLAB's naming rules, and that all parentheses, quotes and commas are properly paired.
2. Wrong number of arguments: make sure the number of arguments in the call matches the function's definition.
3. Mismatched data types: make sure the arguments you pass have the data types the function expects.
4. Undefined functions or variables: make sure every function or variable you reference is defined in the current workspace.
5. Missing closing brackets or semicolons: MATLAB statements usually end with a semicolon, and function or array definitions need correctly matched opening and closing brackets.
6. Unexpected characters: check the code for stray special or escape characters that can confuse the parser.
7. MATLAB version compatibility: code written for a different MATLAB release may use syntax your current version does not accept; make sure the code is compatible with the version you are running.
To track the problem down, work through these steps: carefully inspect the reported line and the code around it for the issues listed above.
Use the built-in editor features, such as syntax highlighting and auto-completion, to spot potential mistakes.
Debug the code in stages, running it line by line or setting a breakpoint with dbstop if error to locate exactly where the failure occurs.
If the cause is still unclear, simplify the failing code or split it into pieces so the problem is easier to isolate and identify.
Remember that MATLAB error messages usually state where the error occurred and what probably caused it; making full use of that information is the fastest way to find and fix the problem.
Detailed usage of abstract in C#

The abstract modifier can be applied to classes, methods, properties, indexers and events, but not to fields. A class marked abstract can only serve as a base class for other classes and cannot be instantiated, and every abstract member it declares must be implemented in a non-abstract derived class; implementing only some of them is a compile error. For example:

```
using System;

namespace ConsoleApplication8
{
    class Program
    {
        static void Main(string[] args)
        {
            BClass b = new BClass();
            b.m1();
        }
    }

    abstract class AClass
    {
        public abstract void m1();
        public abstract void m2();
    }

    class BClass : AClass
    {
        public override void m1()
        {
            throw new NotImplementedException();
        }
        //public override void m2()
        //{
        //    throw new NotImplementedException();
        //}
    }
}
```

Because BClass overrides m1 but leaves m2 commented out, this program does not compile.

Abstract classes have the following features:
1. An abstract class cannot be instantiated, but it may declare instance constructors. Whether a class can be instantiated depends on how it is declared (for an abstract class, instantiation is forbidden); even if no constructor is written, the compiler still supplies a default one.
2. An abstract class may contain abstract methods and accessors.
3. An abstract class cannot be marked sealed, since sealed means the class cannot be inherited.
4. Every non-abstract class derived from an abstract class must implement all of its abstract members, including methods, properties, indexers and events.

A method marked abstract has the following features:
1. An abstract method is implicitly a virtual method.
2. Abstract methods may only be declared in abstract classes.
3. Because an abstract method is only a declaration and provides no implementation, it ends with a semicolon and has no body (no braces), for example: public abstract void MyMethod();
4. The implementation is supplied in a derived class by an overriding method marked override.
5. An abstract method may not also be marked virtual or static: it cannot be static, and since abstract already implies virtual there is no need to add virtual.

Abstract properties behave much like abstract methods, with the following differences:
1. The abstract modifier cannot be applied to a static property.
2. An abstract property is overridden in a non-abstract derived class using the override modifier.

Abstract classes and interfaces:
1. An abstract class must provide implementations for all members of any interface it implements.
2. However, an abstract class that inherits an interface may map the interface members onto abstract methods. For example:

```
interface I
{
    void M();
}

abstract class C : I
{
    public abstract void M();
}
```

A complete abstract-class example:

```
// abstract_keyword.cs
// Abstract classes
using System;

abstract class BaseClass                // abstract class
{
    protected int _x = 100;             // an abstract class may define fields, but there is no such thing as an "abstract field"
    protected int _y = 150;

    public BaseClass(int i)             // instance constructors are allowed; they can only be invoked by derived non-abstract classes.
    {                                   // once a constructor is written explicitly, the compiler no longer supplies a default one.
        fielda = i;
    }

    public BaseClass() { }

    private int fielda;
    public static int fieldsa = 0;

    public abstract void AbstractMethod();          // abstract method
    public abstract int X { get; }                  // abstract property
    public abstract int Y { get; }
    public abstract string IdxString { get; set; }  // abstract property
    public abstract char this[int i] { get; }       // abstract indexer
}

class DerivedClass : BaseClass
{
    private string idxstring;
    private int fieldb;

    // If the base class declared only parameterized constructors, the derived class
    // constructor would have to call one of them explicitly or compilation would fail.
    public DerivedClass(int p) : base(p)  // ": base(p)" could be omitted here because the base class also declares a parameterless constructor
    {
        fieldb = p;
    }

    public override string IdxString      // override the abstract property
    {
        get { return idxstring; }
        set { idxstring = value; }
    }

    public override char this[int i]      // override the abstract indexer
    {
        get { return IdxString[i]; }
    }

    public override void AbstractMethod()
    {
        _x++;
        _y++;
    }

    public override int X                 // override the abstract property
    {
        get { return _x + 10; }
    }

    public override int Y                 // override the abstract property
    {
        get { return _y + 10; }
    }

    static void Main()
    {
        DerivedClass o = new DerivedClass(1);
        o.AbstractMethod();
        Console.WriteLine("x = {0}, y = {1}", o.X, o.Y);
    }
}
```
Java varargs ("abstract variadic function")

Java is an object-oriented programming language with many powerful features; one of them is the variadic (varargs) function. A variadic function accepts an indeterminate number of arguments and handles them inside the method as an array. This article walks through how to declare and use such a method in Java.

First, why are varargs useful? They let us write more flexible and extensible code without fixing the number of parameters in advance: a caller can pass any number of arguments, and the method automatically receives them packed into an array. This is very convenient whenever the amount of input is not known ahead of time, especially when writing general-purpose methods.

To declare a varargs method, append an ellipsis (...) to the type of the parameter and give the parameter a name. Here is an example:

```
public void abstractVariadicFunction(String... params) {
    // code that processes the parameters
}
```

In this example we declare a method named abstractVariadicFunction that takes a varargs parameter named params. The params parameter can receive any number of String arguments, and inside the method they are handled as an array.

Now let us look at how to use these varargs inside the method. Because params is an array, we can process it with a for loop or any other array operation. For example:

```
public void abstractVariadicFunction(String... params) {
    for (String param : params) {
        System.out.println(param);
    }
}
```

This version iterates over the params array and prints each element. We can call the method with any number of arguments, and all of them will be printed.
there are no expected calls of the method
In software development we regularly run into methods that exist in the code base but are never used.
Such methods are described as having "no expected calls".
The situation can leave developers unsure what to do, because dead methods carry potential security risk and extra overhead.
This article describes how to deal with the problem and how to avoid creating it in the first place.
First, we need to understand why methods with no expected calls appear at all.
Usually they are a by-product of code complexity and change.
In software development, change is unavoidable.
Some methods that were called frequently in the past are simply no longer needed.
They remain in the code base but are never deleted.
In other cases a developer adds new methods that, for whatever reason, never end up being fully used.
Whatever the cause, these uncalled methods need to be dealt with.
To find them, we can use static code analysis tools.
Such tools scan the code base and report methods that are never referenced; a minimal Python sketch of the underlying idea follows below.
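The sketch below is only an illustration of what such a tool does, under the simplifying assumption that a function counts as "used" if anything in the same file calls it by name; it misses reflection, callbacks passed by reference, and cross-file calls, so real analyzers are considerably more thorough. The function name and sample code are invented for the example.

```
import ast

def find_unreferenced_functions(source: str) -> set:
    """Return names of functions defined in `source` but never called by name."""
    tree = ast.parse(source)
    defined, referenced = set(), set()

    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            defined.add(node.name)                 # a function or method definition
        elif isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name):         # plain call: foo()
                referenced.add(func.id)
            elif isinstance(func, ast.Attribute):  # method call: obj.foo()
                referenced.add(func.attr)

    return defined - referenced

sample = """
def used():
    return 1

def unused():
    return 2

print(used())
"""
print(find_unreferenced_functions(sample))  # {'unused'}
```

Note that a function passed around only as a callback (for example, map(helper, items)) would be flagged incorrectly by this sketch, which is exactly why the output of such tools should be reviewed before anything is deleted.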
Once these methods have been found, we need to decide whether to delete them or to mark them as deprecated.
If we decide to delete them, we must make sure that removing them does not break any other code.
If we instead mark them as deprecated, we need to make sure other developers stop calling them; one lightweight way to do that is shown in the sketch right after this paragraph.
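Languages such as Java offer a built-in @Deprecated annotation for this; Python has no single standard mechanism, but a small decorator that emits a DeprecationWarning serves the same purpose. The decorator below is an illustrative sketch (the name deprecated and the example function are my own, not a standard library API):

```
import functools
import warnings

def deprecated(reason):
    """Mark a function as deprecated: it still works, but every call emits a warning."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                "%s is deprecated: %s" % (func.__name__, reason),
                category=DeprecationWarning,
                stacklevel=2,   # point the warning at the caller, not at this wrapper
            )
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated("use generate_report() instead")
def old_report():
    return "report"

old_report()  # still returns "report", but emits a DeprecationWarning
```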
The best way to avoid producing unused methods in the first place is to review code early in development.
During code review we can check what each method is for and whether it actually needs to exist.
A method that is no longer needed should be deleted promptly.
In addition, unit tests help ensure the code is correct.
Writing unit tests surfaces problems in the code early, including methods that nothing exercises.
In short, methods with no expected calls are a common problem, but a solvable one.
We detect them with static code analysis tools and then decide whether to delete them or mark them as deprecated.
To keep them from appearing at all, we should review code early in development and use unit tests to ensure correctness.
These practices improve code quality and reduce potential security risk.
Free Pascal compile-time error messages
1. Out of memory
2. Identifier expected
3. Identifier not found * (e.g. "Identifier not found INTEGR" when INTEGER is misspelled)
4. Duplicate identifier * (e.g. "Duplicate identifier N" when variable N is declared twice)
5. Syntax error *
6. Error in real constant
7. Error in integer constant
8. String constant exceeds line
9. Too many nested files
10. Unexpected end of file
11. Line too long
12. Type identifier expected
13. Too many open files
14. Invalid file name
15. File not found *
16. Disk full
17. Invalid compiler directive
18. Too many files
19. Undefined type in pointer definition
20. Variable identifier expected
21. Error in type definition *
22. Structure too large
23. Set base type out of range
24. File components may not be files or objects
25. Invalid string length
26. Type mismatch *
27. Invalid subrange base type
28. Lower bound greater than upper bound
29. Ordinal type expected
30. Integer constant expected
31. Constant expected
32. Integer or real constant expected
33. Pointer type identifier expected
34. Invalid function result type
35. Label identifier expected
36. BEGIN expected *
37. END expected *
38. Integer expression expected
39. Ordinal expression expected
40. Boolean expression expected
41. Operand types do not match operator
42. Error in expression
43. Illegal assignment *
44. Field identifier expected
45. Object file too large
46. Undefined external
47. Invalid object (OBJ) file record
48. Code segment too large
49. Data segment too large *
50. DO expected *
51. Invalid PUBLIC definition
52. Invalid EXTRN definition
53. Too many EXTRN definitions
54. OF expected *
55. INTERFACE expected
56. Invalid relocatable reference
57. THEN expected *
58. TO (or DOWNTO) expected *
59. Undefined forward
60. Too many procedures
61. Invalid typecast
62. Division by zero
63. Invalid file type
64. Cannot Read or Write variables of this type *
65. Pointer variable expected
66. String variable expected
67. String expression expected
68. Circular unit reference
69. Unit name mismatch
70. Unit version mismatch
71. Duplicate unit name
72. Unit file format error
73. IMPLEMENTATION expected
74. Constant and case types do not match
75. Record variable expected
76. Constant out of range
77. File variable expected
78. Pointer expression expected
79. Integer or real expression expected
80. Label not within current block
81. Label already defined
82. Undefined label in preceding statement part
83. Invalid @ argument
84. UNIT expected
85. ";" expected *
86. ":" expected *
87. "," expected *
88. "(" expected *
89. ")" expected *
90. "=" expected *
91. ":=" expected *
92. "[" or "(" expected *
93. "]" or ")" expected *
94. "." expected *
95. ".." expected *
96. Too many variables
97. Invalid FOR control variable
98. Integer variable expected
99. Files and procedure types are not allowed here
100. String length mismatch
101. Invalid ordering of fields
102. String constant expected
103. Integer or real variable expected
104. Ordinal variable expected
105. INLINE error
106. Character expression expected
107. Too many relocation items
112. CASE constant out of range
113. Error in statement
114. Cannot call an interrupt procedure
116. Must be in 8087 mode to compile this
117. Target address not found
118. Include files are not allowed here
120. NIL expected
121. Invalid qualifier
122. Invalid variable reference
123. Too many symbols
124. Statement part too large
126. Files must be var parameters
127. Too many conditional symbols
128. Misplaced conditional directive
129. ENDIF directive missing
130. Error in initial conditional defines
131. Header does not match previous definition
132. Critical disk error
133. Cannot evaluate this expression * (e.g. "Cannot evaluate constant expression")
134. Expression incorrectly terminated
135. Invalid format specifier
136. Invalid indirect reference
137. Structured variables are not allowed here
138. Cannot evaluate without the System unit
139. Cannot access this symbol
140. Invalid floating-point operation
141. Cannot compile overlays to memory
142. Procedure or function variable expected
143. Invalid procedure or function reference
144. Cannot overlay this unit
147. Object type expected
148. Local object types are not allowed
149. VIRTUAL expected
150. Method identifier expected
151. Virtual constructors are not allowed
152. Constructor identifier expected
153. Destructor identifier expected
154. Fail only allowed within constructors
155. Invalid combination of opcode and operands
156. Memory reference expected
157. Cannot add or subtract relocatable symbols
158. Invalid register combination
159. 286/287 instructions are not enabled
160. Invalid symbol reference
161. Code generation error
162. ASM expected
abstractmethoderror case study
1. Introduction
The "abstract method error" is a common failure in Python code that uses abstract base classes or inheritance: a subclass fails to provide an implementation for an inherited abstract method. (Python reports this situation as a TypeError raised when the incomplete subclass is instantiated, rather than through a dedicated AbstractMethodError exception, which is the name of the analogous error in Java.) In this article we look at what the error means, why it happens, how to fix it, and then walk through a small case to help readers understand and avoid it.
2. What is an abstractmethoderror?
It is the error triggered when a class that uses Python's abstract base classes (Abstract Base Classes, ABCs) or inherits from one does not implement all of the abstract methods. In object-oriented programming, an abstract base class is a class containing abstract methods, and those methods must be given concrete implementations in subclasses. If a subclass fails to implement every abstract method, this error is the result.
3. What causes it
There are usually two causes:
3.1. Not implementing an abstract method: the subclass never defines an abstract method declared in the abstract base class.
3.2. Calling an unimplemented abstract method: the code calls an abstract method that was declared in the base class but never implemented in the subclass.
4. How to avoid it
To avoid the error we can take the following steps:
4.1. Implement every abstract method: the subclass must implement all abstract methods defined in the abstract base class, so that none is left unimplemented.
4.2. Use the @abstractmethod decorator: mark abstract methods in the base class with the @abstractmethod decorator (from the abc module), so that Python forces concrete subclasses to implement them.
5. Case study
A simple case makes the error easier to understand. Suppose we have an abstract base class Animal that defines an abstract method speak(), and we then define a subclass Dog but forget to implement speak() in it. As soon as we try to use Dog and call speak(), the failure occurs; a runnable sketch of exactly this case follows below.
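A minimal, hypothetical version of that case is sketched below (the class and method names come from the description above; the exact wording of the error message varies between Python versions):

```
from abc import ABC, abstractmethod

class Animal(ABC):
    @abstractmethod
    def speak(self):
        """Every concrete animal must say something."""

class Dog(Animal):
    pass  # speak() is intentionally missing

try:
    Dog().speak()
except TypeError as exc:
    # Python refuses to even instantiate the incomplete subclass, e.g.:
    # "Can't instantiate abstract class Dog with abstract method speak"
    print(exc)

class FixedDog(Animal):
    def speak(self):
        return "Woof!"

print(FixedDog().speak())  # prints: Woof!
```

The fix is exactly what section 4 prescribes: give the subclass a concrete speak() implementation, after which instances can be created and the method called normally.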
The main exception-handling subclasses in the Java language

Subclasses of Error:
AbstractMethodError: an abstract method was called
ClassFormatError: malformed class file
IllegalAccessError: illegal access
IncompatibleClassChangeError: an incompatible change was made to a class
InstantiationError: attempt to instantiate an interface or abstract class
InternalError: internal Java (JVM) error
LinkageError: linkage failure
NoClassDefFoundError: class definition not found
NoSuchFieldError: field not found
NoSuchMethodError: a non-existent method was called
OutOfMemoryError: out of memory
StackOverflowError: stack overflow
ThreadDeath: a thread has died
UnknownError: unknown error
UnsatisfiedLinkError: link not satisfied
VerifyError: class verification failed
VirtualMachineError: virtual machine error

Subclasses of Exception:
ClassNotFoundException: class not found
DataFormatException: bad data format
IllegalAccessException: illegal access
InstantiationException: instantiation failure
InterruptedException: thread interrupted
NoSuchMethodException: a non-existent method was requested
RuntimeException: runtime exception

Subclasses of RuntimeException:
ArithmeticException: arithmetic error
ArrayIndexOutOfBoundsException: array index out of bounds
ArrayStoreException: invalid array store
ClassCastException: invalid class cast
IllegalArgumentException: illegal argument
IllegalThreadStateException: illegal thread state
IndexOutOfBoundsException: index out of bounds
NumberFormatException: bad numeric format
NegativeArraySizeException: negative array size
NullPointerException: null reference
SecurityException: security violation
StringIndexOutOfBoundsException: string index out of bounds
How abstract classes work in C++
1. Overview
In C++ we often hear about "abstract" classes. What exactly is an abstract class, and what is it for?
2. What is an abstract class?
Unlike C# or Java, standard C++ has no abstract keyword; a class becomes abstract by declaring at least one pure virtual function. An abstract class is a class that cannot be instantiated, meaning you cannot create objects of it directly. Abstract classes are normally used to define interfaces and abstract behaviour, with the intent that other classes inherit from them and implement their pure virtual functions.
3. Defining an abstract class
To define an abstract class, declare one or more pure virtual functions in it. A pure virtual function is a virtual function that is declared in the class but given no implementation there; it is marked by appending "= 0" to the declaration. For example:

```
class AbstractClass {
public:
    virtual void pureVirtualFunction() = 0;
};
```

4. What abstract classes are for
Abstract classes serve several purposes:
- Defining an interface: an abstract class defines a set of operations describing an abstract behaviour, which other classes inherit and implement. This lets us call derived-class functions through a base-class pointer.
- Constraining behaviour: an abstract class forces its derived classes to implement certain operations, which guarantees that all derived classes expose the same interface and improves consistency and maintainability.
- Preventing instantiation: objects of an abstract class cannot be created, which keeps the class from being used by mistake and avoids a class of potential errors.
5. Using an abstract class
In C++ we use an abstract class by inheriting from it and implementing the pure virtual functions it declares. For example:

```
class ConcreteClass : public AbstractClass {
public:
    void pureVirtualFunction() override {
        // concrete implementation of the pure virtual function
    }
};
```

In the example above, ConcreteClass inherits from AbstractClass and implements the pure virtual function pureVirtualFunction declared there.
must implement the inherited abstract method
When a class inherits from an abstract class, it must implement all of the abstract methods declared in that abstract class.
If the subclass does not implement an abstract method inherited from its parent, the compiler reports a "must implement the inherited abstract method" error.
For example, suppose there is an abstract class Animal with an abstract method eat().
If we now create a subclass Dog that extends Animal but does not implement eat(), the following code fails to compile.
```
public abstract class Animal {
public abstract void eat();
}
public class Dog extends Animal {
    // the eat() method is missing
}
```
In this case the compiler reports: "Dog must implement the inherited abstract method eat() from Animal".
To fix the error, we implement the eat method in the Dog class:
```
public class Dog extends Animal {
@Override
public void eat() {
        System.out.println("Dog is eating.");
}
}
```
Now the program compiles and runs successfully.
In short, once a class inherits from an abstract class it must implement every abstract method, or the compiler will report an error.
Expecting the Unexpected: Adaptation for Predictive Energy Conservation Jeffrey P.Rybczynski,Darrell D.E.Long†Ahmed Amer‡Storage Systems Research Center Department of Computer Science University of California,Santa Cruz University of PittsburghABSTRACTThe use of access predictors to improve storage device per-formance has been investigated for both improving access times,as well as a means of reducing energy consumed by the disk.Such predictors also offer us an opportunity to demonstrate the benefits of an adaptive approach to han-dling unexpected workloads,whether they are the result of natural variation or deliberate attempts to generate a prob-lematic workload.Such workloads can pose a threat to sys-tem availability if they result in the excessive consumption of potentially limited resources such as energy.We pro-pose that actively reshaping a disk access workload,using a dynamically self-adjusting access predictor,allows for con-sistently good performance in the face of varying workloads. Specifically,we describe how our Best Shifting prefetching policy,by adapting to the needs of the currently observed workload,can use15%to35%less energy than traditional disk spin-down strategies and5%to10%less energy than the use of afixed prefetching policy.Categories and Subject DescriptorsD.4.3[Operating Systems]:File Systems Management;D.4.5[Operating Systems]:Reliability;D.4.6[Operating Systems]:Security and Protection;H.3.m[Information Storage and Retrieval]:Miscellaneous—Caching,Prefetch-ing,Power Management,StorageGeneral TermsAlgorithms,Design,Security,PerformanceKeywordsMobile Computing,Power Management,Adaptive Policies, Prediction,Prefetching,Disk Spin-down†Supported in part by the National Science Foundation un-der award CCR-0204358‡Supported in part by the National Science Foundation un-der award ANI-0325353.Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on thefirst page.To copy otherwise,to republish,to post on servers or to redistribute to lists,requires prior specific permission and/or a fee.StorageSS’05,November11,2005,Fairfax,Virginia,USA.Copyright2005ACM1-59593-223-X/05/0011...$5.00.1.INTRODUCTIONIn addition to protecting storage and data from intru-sion and misuse,ensuring the security of a storage device includes guaranteeing the system’s continued availability in the face of differing workloads.Unforeseen workloads can be the result of unexpected changes in load or application behavior,or even the result of malicious and deliberate user activity.In the latter case,it may be possible to limit re-quests from problematic client systems or applications,but even with a nominally light load a workload can be harmful to the system.For example,two clients making the exact same number of requests to a disk subsystem can result in radically different energy consumption by the system.To see this,consider a set of read requests that are presented to a disk in one burst.After these requests are satisfied,and a timeout period of inactivity has passed,the disk may en-ter a low-power state to conserve energy and remain in that state until the next set of requests arrives.On the other hand,if the exact same number of requests arrive with an inter-arrival time that is slightly greater than the inactivity timeout of the disk,it will continuously enter a low power state,only to be 
returned to an active state almost immedi-ately afterwards.This behavior leads to the consumption of excess energy at each such awakening,the disk spin-up cost expended to bring the disk back to active state,resulting in a great deal of excess energy being expended.As long as afixed spin-down policy is employed,the system is vulner-able to encountering such unexpected problem workloads, whether they be deliberately or naturally pathological. One method to avoid such problematic workloads is to develop fully adaptive strategies for managing the disk sub-system.By continually and dynamically adapting the disk spin-down and prefetching policies in response to the cur-rent workload,such strategies provide a system that utilizes the best policy available in the face of the current workload, regardless of how such a workload may vary.In this case we look at the problem of conserving disk energy consumption, but in doing so we are also effectively minimizing total disk motion and reducing unnecessary spin-ups and spin-downs. This in turn implies a reduction in overall mechanical wear and an increase in the reliability and availability of the disk and the system as a whole.In this manner,a policy aimed at reducing disk energy consumption and activity can miti-gate the effects of malicious and pathological workloads,as well as increasing the overall longevity of hard drives. While processors are still the main consumer of system power,it has been shown that the hard disk can use up to 30%of the total system energy[9],making the disk sub-system a prime candidate for energy conservation.Much research has been dedicated to conserving energy,particu-larly in mobile environments.We use prediction and its ap-plication to energy conservation as a means to illustrate the benefits of adaptive and self-optimizing strategies in the face of varying workloads.We contend that even in applications where prediction is the goal,such adaptive management is advantageous in handling the inevitably unpredictable and variant.2.PREDICTORS AND DISK POWERDisk systems use a significant amount of energy.Unlike most electronics in a computer,the disk has mechanical com-ponents.The spinning disk platters and the actuator arm require a considerable amount of energy to start operation. 
Powering down the disk to conserve energy is therefore only worthwhile if the disk can remain idle long enough to con-serve as much as the additional energy that would be needed to spin the disk back up again.Aside from intelligently de-ciding whether to spin the disk down for each idle period (dynamic disk spin-down),another technique is to actively reshape the workload by prefetching data that will be re-quested in the future,which would result in longer periods of inactivity between such bursts.Different access prediction policies can meet with varying success for different work-loads,and it is interesting to note that the most accurate predictors are not necessarily the most beneficial for conserv-ing energy.Our Best Shifting policy provides an effective energy-conserving predictor,while automatically updating its prediction policy in light of the current workload.2.1Accurate Prediction is Not OptimalTo allow our disk spin-down policy the longest idle peri-ods,a perfect predictor will need to both prefetch read data, as well as judiciously delay or accelerate write-backs of mod-ified data back to the disk.To test our dynamic policy,and competing predictors,we evaluate the performance of such a perfect oracle for each test workload.Simply prefetching the next N items(even if you are per-fectly accurate)is not always the best strategy.Assume that we have an access pattern which contains items A,B,C and D,and we are trying to create long disk idle periods so we can spin the disk down.Now,assume the following access sequence:ABABCDDDBBBB(shown in Figure1)and that we have a cache with a capacity of2items.If we assume that all items are predictable,then predicting the next items thatfit in the cache will get us an access pattern as shown in Figure1(b),with three idle periods of length three.As we can see from the sequence,if we fetch D and prefetch B towards the end of the trace(i.e.,delaying the eviction of D),then we can reduce this to only two idle periods,one of length three and the other of length six(Figure1(c)). Through dynamic programming we identify the behavior of a perfect oracle,rather than relying on simply prefetching the next N items which,as was shown in Figure1(b),is not an optimal strategy.The oracle works on the principal that all accesses are in one of three categories,either they are predictable,un-predictable,or may be delayed.Predictable accesses are for data that has been requested before,and assuming a prefetching algorithm was smart enough,could be predicted and prefetched.An unpredictable disk access,whether it is a write that needs to happen immediately or just afile that has not been requested before,refers to accesses that cannot be predicted and will always result in disk activity.Accesses that can be delayed are a special case.These are accesses which can be postponed for at most a given time period.Af-ter that time period expires,if the data has yet to be written to disk or read,it becomes equivalent to an unpredictable disk access that must happen immediately.An example of an access that can be postponed is a write request stored in a write buffer and waiting for aflexible time-out before be-ing written to disk.This allows the system to batch writes with other requests based on the current state of the disk so that they can all go to the disk at the same time,creating a busier burst period and a possibly longer idle period.Weis-sel et al.[18]have also shown thatflexible write time-outs can be used to batch disk requests and save disk power. 
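As a concrete reading of the break-even condition stated at the start of this section, spinning the disk down only pays off if the idle period is long enough to recover the cost of the spin-down/spin-up cycle. The formulation below is an illustrative sketch using our own symbols (they do not appear in the paper): let $P_{idle}$ be the power drawn while the disk spins idly, $P_{sleep}$ the power drawn once it is spun down, and $E_{down}+E_{up}$ the extra energy spent on spinning down and back up. Spinning down during an idle period of length $T$ saves energy only when

$$
P_{idle}\,T \;>\; P_{sleep}\,T + E_{down} + E_{up}
\qquad\Longleftrightarrow\qquad
T \;>\; \frac{E_{down}+E_{up}}{P_{idle}-P_{sleep}} .
$$

A fixed spin-down timeout is essentially a guess at this threshold, which is why the pathological workload described in the introduction, whose inter-arrival times sit just above the timeout, forces the disk to keep paying $E_{down}+E_{up}$ without ever idling long enough to recoup it.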
The oracle is also optimal in terms of the spin-down policy. If the idle period is long enough to make a spin-down efficient, then this perfect oracle assumes a spin-down occurs at the start of the idle period. This allows the oracle to represent not only optimal prefetching and request batching, but also optimal spin-down strategy as well. The energy value estimated for our oracle is therefore a strict lower bound on how much disk energy a given trace would require.

2.2 Best-Shifting Prediction

To demonstrate the effectiveness and feasibility of dynamically adapting a predictive disk power management mechanism to the current workload, we present the Best Shifting policy. Since we will demonstrate how different prefetching policies can perform best for different workloads, and that access patterns can vary substantially, it would be best to use an algorithm that would automatically switch to the best prefetching policy in response to the current workload. This is the basis of our Best Shifting prefetching policy, which adjusts which prefetching algorithm it uses based on which is likely to conserve the most energy for the current access pattern.

The Best Shifting policy uses machine learning techniques to choose which policy out of six implemented component algorithms works best for the workload at that point in time. The Best Shifting policy dynamically chooses the best policy, not based upon hit ratio performance, but rather, based on dynamically estimated energy savings for each component policy.

The six component predictors we evaluate in comparison to our Best Shifting policy are: Unmodified, Last Successor [1, 2], First Successor [1, 2], Stability [1, 2], FMOC [11], and EPCM [12]. Each of these prefetching policies uses past access events to predict future accesses. The unmodified policy simply leaves the original workload unchanged, while the Successor and Stability predictors are based on simple pair-wise associations. The FMOC and EPCM predictors are based on data compression and context modeling.

To keep track of the performance of the different predictors, each policy has its own virtual cache, containing the data it would have in the cache if it were the system's prefetching policy. The virtual cache then allows us to derive which data accesses would cause disk activity for the different policies. Each policy then stores these disk accesses in its own Disk Access Window, which is a snapshot of all the disk accesses a given policy would have created in the past N seconds had it been the system's prefetching policy. From this window, we can directly estimate potential energy consumption.

[Figure 1: The initial request pattern with even spaces between each request, and the corresponding disk accesses for: (a) no prefetching, (b) perfect greedy prefetching, and (c) optimal prefetching.]

Virtual caches were used previously by Ari et al. [3] and Gramacy et al. [8]. Our use of virtual caches differs in that we evaluate each policy's performance based on the estimated energy cost of using that policy, and not the simple hit ratios that it would have achieved. Best Shifting uses the virtual caches to determine the disk accesses each policy would have created had it been the system's policy. Then, it periodically calculates the idle durations and estimates the energy used by each policy. The policy with the lowest estimated energy usage is adopted
as the policy for the cache.This selection is actually based on a relative weighting of the components,and the use of a machine-learning algorithm to dynamically adjust these weights.When Best Shifting ’s policy changes,there are two strate-gies that we can employ to realize this change.First we can simply change the policy without affecting the contents of the cache.This changes the way future predicted data will be prefetched,though it does nothing to the data already in the cache.The second strategy is to “roll over”the cache.This is the process of synchronizing the virtual cache of the winning policy with the actual cache.This operation,how-ever,can take many disk accesses to perform,and so is only to be attempted if the disk is in the active state.If the cache policy changes while the disk is down,roll-over does not take place.If the disk is already active,then we can fetch the data we need to synchronize the cache in the background without causing an unnecessary disk spin-up.Roll-over can quickly help cache performance after the policy is switched due to a workload change that favors the new policy.3.EXPERIMENTAL RESULTSTo test Best Shifting ,our dynamic prefetching policy,we used file system traces from a varied selection of sources.A cache emulator is used to model the system cache while keeping track of the demand-fetched and prefetched files,along with cache statistics.If a trace entry asks for a filethat is not in the cache,a disk request is created.Our cache emulator records the timing of file accesses that result in cache misses and require physical disk activity.This output is then used as the input to a spin-down algorithm,and disk energy usage can then be calculated.For our tests,we used a cache emulator with a typical 30second write-buffer time-out.The output for this emulator was then run through a dynamic spin-down algorithm,implemented as described by Helmbold et al.[10].The difference among policies was the prefetching algo-rithms,which were used to predict and prefetch possible future data requests.The prefetched files are then placed in the system cache,alongside normal demand fetched files.The cache then uses LRU to decide which files should be evicted.Thus prefetching incorrect files can adversely affect the performance of the cache and reduce the length of idle periods.That is why it is important to use prefetching poli-cies that are accurate and effective.We have implemented six different prefetching policies.Each also uses an under-lying LRU cache eviction algorithm while predicting and prefetching files.Different prefetching policies can be seen to work better for different workloads,and even for the same workload ob-served over different days.Figures 3(a)and 3(b)show how different prefetching algorithms work better when using day long traces from instructional computers at the University of California,Berkeley [16].Figure 2shows the difference in performance for workloads observed on a Windows PC at the University of California,Santa Cruz during early 2005.The energy usage presented in these figures is depicted as a ratio of the estimated energy used by the given policy against the ideal energy usage of the oracle,which has the benefits of perfect prescience and perfect (instant and in-fallibly judicious)spin-down decisions.Both sets of traces demonstrate that a single prefetching policy does not al-ways perform best.This is typical of practically all tracesFigure2:The energy usage for host Periodot at the Uni-versity of California,Santa Cruz on 
February14,2005.(a)Day1(b)Day2Figure3:The energy usage for host INS#23at the University of California,Berekely on October3and4, 1996.we have observed,and suggests that it would be best to use management algorithms which automatically switch to the best policy in response to the current workload.The Best Shifting policy,which aims to do just that,can be seen to consistently offer the best performance compared against all otherfixed and non-adaptive policies.4.RELATED RESEARCHEffectively spinning down the disk drive can save valuable energy in a mobile environment[13,7].We have used a dy-namic spin-down algorithm which adjust the timeout value based on past disk request history.Helmbold et al.[10]de-scribed another dynamic spin-down timeout algorithm that employed a machine learning algorithm to adjust the time-out.Bisson and Brandt demonstrated the practicality of implementing such an algorithm[4].One of the earliest proposals for the incorporation of prediction to dynamically save energy in storage systems was offered by Wilkes[19]. The nature of the access workload,and its interaction with the underlying cache and disk,is crucial for for effective disk power management.Zhu et al.[20]showed that simply min-imizing cache misses does not necessarily result in the min-imum energy usage for a given cache replacement policy. They proposed four different power-aware caching policies that can save up to16%disk energy over a traditional LRU cache policy.Creating busier burst periods and longer idle periods allows the disk to be spun down for longer periods of time.The exploitation and promotion of such bursty be-havior has been explicitly attempted by Weisel et al.[18], and Papathanasiou and Scott[15].They found that tra-ditional OS resource management policies tend to“smooth out”these burst and idle periods.The Milly Watt Project[5] contended that because application needs are the driving force behind power management strategies,it is useful to propagate energy efficiency information to the application. Nobel’s implementation,Odyssey[14],showed a factor of five increase in performance over three different benchmarks. More specifically,Flinn et al.[6]showed that collaboration between the operation system and applications can be used to achieve longer battery life and less energy consumption. Their implementation in the Linux kernel monitored energy supply and demand to select a tradeoffbetween energy con-servation and performance.Others have also used dynamic collaboration between the operating system and applications to increase energy efficiency and performance[18,17]. 
5.CONCLUSION&FUTURE RESEARCH Increasing system longevity,and increasing the overall re-liability a system is an important goal when providing secure and available storage.Resources can be wasted,and avail-ability threatened,by unforeseen changes in workload,as well as potentially deliberate problem workloads.One as-pect of increasing availability in the face of such a threat is workload reshaping with the goal of reducing disk en-ergy consumption.By creating increasingly bursty disk ac-cess patterns and longer disk idle periods through prediction and prefetching,such active workload reshaping can increase disk energy savings.But while some predictors are better than others,there is rarely a universal single choice of al-gorithm that is best for all possible workloads.This is par-ticularly true if the nature of workload change is a result of deliberate attempts to produce a problematic,though light,workload.In such a situation,a dynamically self-optimizing algorithm,such as our Best Shifting prefetcher,has the po-tential to leverage the best strategy regardless of the encoun-tered request patterns.This particular prefetcher,when combined with a dynamic spin-down mechanism,results in the use of15%to35%less energy than traditional predictive prefetching and spin-down policies.Here we have focused solely on prefetchers and disk en-ergy,but we aim to investigate the benefits of dynamic adap-tation for other subsystems,and in the face of more deliber-ate adversaries.While we have shown that by dymanically selecting the prefetching strategy we can lengthen disk idle periods and save energy,our oracle results also show that there is possibly more energy yet to be saved.By imple-menting different prefetching strategies,with the goal not only to make more accurate predictions but also to create busier burst periods,we aim to come closer to optimal disk energy savings,regardless of the workload our algorithms may encounter.6.ACKNOWLEDGEMENTSWe are grateful to all the members of the Storage Sys-tems Research Center who are a constant source of stim-ulating comments,especially Prof.David Helmbold and to our sponsors(Engenio,Hewlett-Packard Laboratories,Hi-tachi Global Storage Technologies,IBM Research,Intel Re-search,Microsoft Research,Network Appliance and Veri-tas)for theirfinancial support.We are also grateful to our colleagues at the University of Pittsburgh’s Department of Computer Science,especially members of the Storage Re-search Group,and particularly Prof.Panos K.Chrysanthis, of the Advanced Data Management Technologies Labora-tory,for valuable discussions.Jeffrey Rybczynski wishes to express his special thanks to Prof.Patrick Mantey for his support during the author’s graduate studies,and for his insightful comments on this research.7.REFERENCES[1]Amer,A.,and Long,D.D.E.Noah:Low-costfileaccess prediction through pairs.In Proceedings of the20th IEEE International Performance,Computing and Communications Conference(IPCCC’01)(Apr.2001), IEEE,pp.27–33.[2]Amer,A.,Long,D.D.E.,Pˆa ris,J.-F.,and Burns,R.C.File access prediction with adjustable accuracy.In Proceedings of the International PerformanceConference on Computers and Communication(IPCCC ’02)(Phoenix,Apr.2002),IEEE.[3]Ari,I.,Amer,A.,Gramacy,R.,Miller,E.L.,Brandt,S.A.,and Long,D.D.E.ACME:adaptive caching using multiple experts.In Proceedings inInformatics(2002),vol.14,Carleton Scientific,pp.143–158.[4]Bisson,T.,and Brandt,S.A.Adaptive diskspin-down algorithms in practice.In Proceedings of the 2004Conference on File and Storage 
Technologies(FAST)(2004).[5]Ellis,C.S.The case for higher-level powermanagement.In HOTOS’99:Proceedings of theSeventh Workshop on Hot Topics in Operating Systems (Washington,DC,USA,1999),IEEE ComputerSociety,p.162.[6]Flinn,J.,and Satyanarayanan,M.Energy-awareadaptation for mobile applications.In Symposium onOperating Systems Principles(1999),pp.48–63.[7]Golding,R.,Bosch,P.,Staelin,C.,Sullivan,T.,and Wilkes,J.Idleness is not sloth.In Proceedings of the Winter1995USENIX Technical Conference(New Orleans,LA,Jan.1995),USENIX,pp.201–212.[8]Gramacy,R.B.,Warmuth,M.K.,Brandt,S.A.,and Ari,I.Adaptive caching by refetching.InAdvances in Neural Information Processing Systems15 (2003),MIT Press,pp.1465–1472.[9]Greenawalt,P.M.Modeling power management forhard disks.In Proceedings of the2nd InternationalSymposium on Modeling,Analysis,and Simulation of Computer and Telecommunication Systems(MASCOTS ’94)(Durham,NC,Jan.1994),IEEE,pp.62–66. [10]Helmbold,D.P.,Long,D.D.E.,Sconyers,T.L.,and Sherrod,B.Adaptive disk spin-down for mobile computers.ACM/Baltzer Mobile Networks and Applications(MONET)5,4(2000),285–297.[11]Kroeger,T.M.,and Long,D.D.E.The case forefficientfile access pattern modeling.In Proceedings of the7th IEEE Workshop on Hot Topics in OperatingSystems(HotOS-VII)(Rio Rico,Arizona,Mar.1999), pp.14–19.[12]Kroeger,T.M.,and Long,D.D.E.Design andimplementation of a predictivefile prefetchingalgorithm.In Proceedings of the2001USENIX Annual Technical Conference(Boston,Jan.2001),pp.105–118.[13]Li,K.,Kumpf,R.,Horton,P.,and Anderson,T.A quantitative analysis of disk drive powermanagement in portable computers.In Proceedings of the Winter1994USENIX Technical Conference(SanFrancisco,CA,Jan.1994),pp.279–291.[14]Noble,B.D.,Satyanarayanan,M.,Narayanan,D.,Tilton,J.E.,Flinn,J.,and Walker,K.R.Agile application-aware adaptation for mobility.InSixteen ACM Symposium on Operating SystemsPrinciples(Saint Malo,France,1997),pp.276–287. [15]Papathanasiou,A.E.,and Scott,M.L.Energyefficient prefetching and caching.In Proceedings of the 2004USENIX Annual Technical Conference(Boston, MA,June2004),pp.255–268.[16]Roselli,D.,and Anderson,T.E.Characteristicsoffile system workloads.Research report,University of California,Berkeley,June1996.[17]Vahdat,A.,Lebeck,A.,and Ellis,C.Every jouleis precious:The case for revisiting operating systemdesign for energy efficiency,Sept.2000.[18]Weissel,A.,Beutel,B.,and Bellosa,F.Cooperative I/O:A novel I/O semantics forenergy-aware applications.SIGOPS Oper.Syst.Rev.36,SI(2002),117–129.[19]Wilkes,J.Predictive power conservation.TechnicalReport HPL-CSP-92-5,Hewlett-Packard Laboratories, Feb.1992.[20]Zhu,Q.,David,F.M.,Devaraj,C.F.,Li,Z.,andZhou,Y.Reducing energy consumption of disk storage using power-aware cache management.In10thInternational Symposium on High PerformanceComputer Architecture(HPCA’04)(Madrid,Spain,Feb.2004),Cisco Systems Inc.,pp.118–129.。