void eolIsSignificant(boolean flag)
Description
The java.io.StreamTokenizer.eolIsSignificant(boolean flag) method determines whether ends of line are treated as tokens. If the flag argument is true, this tokenizer treats ends of lines as tokens; the nextToken method returns TT_EOL and also sets the ttype field to this value when an end of line is read.
A line is a sequence of characters ending with either a carriage-return character ('\r') or a newline character ('\n'). In addition, a carriage-return character immediately followed by a newline character is treated as a single end-of-line token.
If the flag is false, end-of-line characters are treated as white space and serve only to separate tokens.
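The contrast between the two flag values can be sketched as follows (a minimal example using a StringReader instead of a file; the helper method countEolTokens is ours for illustration, not part of the StreamTokenizer API):

```java
import java.io.IOException;
import java.io.StreamTokenizer;
import java.io.StringReader;

public class EolFlagDemo {

    // Hypothetical helper: counts how many TT_EOL tokens the
    // tokenizer reports for the given eolIsSignificant flag.
    static int countEolTokens(String text, boolean flag) throws IOException {
        StreamTokenizer st = new StreamTokenizer(new StringReader(text));
        st.eolIsSignificant(flag);
        int eols = 0;
        while (st.nextToken() != StreamTokenizer.TT_EOF) {
            if (st.ttype == StreamTokenizer.TT_EOL) {
                eols++;
            }
        }
        return eols;
    }

    public static void main(String[] args) throws IOException {
        // Two line terminators: "\r\n" (counted once) and "\n".
        String text = "one\r\ntwo\nthree";

        // flag = true: each line terminator is reported as TT_EOL.
        System.out.println(countEolTokens(text, true));   // 2

        // flag = false: terminators are plain whitespace, never reported.
        System.out.println(countEolTokens(text, false));  // 0
    }
}
```

Note that "\r\n" produces a single TT_EOL token, matching the single end-of-line rule described above.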
Declaration
Following is the declaration for the java.io.StreamTokenizer.eolIsSignificant() method.
public void eolIsSignificant(boolean flag)
Parameters
flag - true indicates that end-of-line characters are separate tokens; false indicates that end-of-line characters are white space.
Return Value
This method does not return a value.
Exceptions
NA
Example
The following example shows the usage of the java.io.StreamTokenizer.eolIsSignificant() method.
package com.iowiki;

import java.io.*;

public class StreamTokenizerDemo {
   public static void main(String[] args) {
      String text = "Hello. This is a text \n that will be split "
         + "into tokens. 1 + 1 = 2";

      try {
         // write the text to a plain file
         // (the original used ObjectOutputStream.writeUTF / ObjectInputStream,
         // whose serialization header bytes would show up as garbage tokens)
         FileWriter out = new FileWriter("test.txt");
         out.write(text);
         out.close();

         // create a new tokenizer over the file
         Reader r = new BufferedReader(new FileReader("test.txt"));
         StreamTokenizer st = new StreamTokenizer(r);

         // treat end of line as a significant token
         st.eolIsSignificant(true);

         // print the stream tokens
         boolean eof = false;
         do {
            int token = st.nextToken();
            switch (token) {
               case StreamTokenizer.TT_EOF:
                  System.out.println("End of File encountered.");
                  eof = true;
                  break;
               case StreamTokenizer.TT_EOL:
                  System.out.println("End of Line encountered.");
                  break;
               case StreamTokenizer.TT_WORD:
                  System.out.println("Word: " + st.sval);
                  break;
               case StreamTokenizer.TT_NUMBER:
                  System.out.println("Number: " + st.nval);
                  break;
               default:
                  System.out.println((char) token + " encountered.");
                  if (token == '!') {
                     eof = true;
                  }
            }
         } while (!eof);
         r.close();
      } catch (Exception ex) {
         ex.printStackTrace();
      }
   }
}
Let us compile and run the above program; this will produce the following result -
Word: Hello.
Word: This
Word: is
Word: a
Word: text
End of Line encountered.
Word: that
Word: will
Word: be
Word: split
Word: into
Word: tokens.
Number: 1.0
+ encountered.
Number: 1.0
= encountered.
Number: 2.0
End of File encountered.