void whitespaceChars(int low, int hi)
Description
The java.io.StreamTokenizer.whitespaceChars(int low, int hi) method specifies that all characters c in the range low <= c <= hi are white space characters. White space characters serve only to separate tokens in the input stream. Any other attribute settings for the characters in the specified range are cleared.
Declaration
Following is the declaration for the java.io.StreamTokenizer.whitespaceChars() method.
public void whitespaceChars(int low, int hi)
Parameters
low - the low end of the range.
hi - the high end of the range.
Return Value
This method does not return a value.
Exception
NA
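Before the full example below, here is a minimal sketch of the "clears any other attribute settings" behavior described above. The class name and the `tokens` helper are hypothetical, introduced only for this sketch: by default StreamTokenizer parses digits as numbers, but after whitespaceChars('0', '9') the digits act purely as token separators.

```java
import java.io.*;
import java.util.*;

public class WhitespaceCharsSketch {
   // Hypothetical helper: collects the word tokens found in the input.
   static List<String> tokens(String input) throws IOException {
      StreamTokenizer st = new StreamTokenizer(new StringReader(input));
      // Treat the digits '0'..'9' as white space: they now only separate
      // tokens, and their default "numeric" attribute is cleared.
      st.whitespaceChars('0', '9');
      List<String> words = new ArrayList<>();
      while (st.nextToken() != StreamTokenizer.TT_EOF) {
         if (st.ttype == StreamTokenizer.TT_WORD) {
            words.add(st.sval);
         }
      }
      return words;
   }

   public static void main(String[] args) throws IOException {
      // The digit run splits one string into two word tokens.
      System.out.println(tokens("foo123bar"));
   }
}
```

Without the whitespaceChars call, the same input would instead yield the word "foo" followed by the number 123.0 and the word "bar".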
Example
The following example shows the usage of the java.io.StreamTokenizer.whitespaceChars() method.
package com.iowiki;

import java.io.*;

public class StreamTokenizerDemo {
   public static void main(String[] args) {
      String text = "Hello. This is a text \n that will be split "
         + "into tokens. 1 + 1 = 2";

      try {
         // create a new file and write the text through an ObjectOutputStream
         FileOutputStream out = new FileOutputStream("test.txt");
         ObjectOutputStream oout = new ObjectOutputStream(out);
         oout.writeUTF(text);
         oout.flush();

         // create an ObjectInputStream for the file we created before
         ObjectInputStream ois = new ObjectInputStream(new FileInputStream("test.txt"));

         // create a new tokenizer over the stream
         Reader r = new BufferedReader(new InputStreamReader(ois));
         StreamTokenizer st = new StreamTokenizer(r);

         // treat the letters 'o' through 't' as white space characters
         st.whitespaceChars('o', 't');

         // print the stream tokens
         boolean eof = false;
         do {
            int token = st.nextToken();
            switch (token) {
               case StreamTokenizer.TT_EOF:
                  System.out.println("End of File encountered.");
                  eof = true;
                  break;
               case StreamTokenizer.TT_EOL:
                  System.out.println("End of Line encountered.");
                  break;
               case StreamTokenizer.TT_WORD:
                  System.out.println("Word: " + st.sval);
                  break;
               case StreamTokenizer.TT_NUMBER:
                  System.out.println("Number: " + st.nval);
                  break;
               default:
                  System.out.println((char) token + " encountered.");
                  if (token == '!') {
                     eof = true;
                  }
            }
         } while (!eof);
      } catch (Exception ex) {
         ex.printStackTrace();
      }
   }
}
Let us compile and run the above program; this will produce the following result -
Word: AHell
Number: 0.0
Word: Thi
Word: i
Word: a
Word: ex
Word: ha
Word: will
Word: be
Word: li
Word: in
Word: ken
Number: 0.0
Number: 1.0
+ encountered.
Number: 1.0
= encountered.
Number: 2.0
End of File encountered.
Note that because the letters 'o' through 't' are treated as white space, letters in that range disappear from the words ("Hello" becomes "Hell", "This" becomes "Thi", "split" becomes "li", and so on); the stray leading characters come from the serialization metadata that ObjectOutputStream writes around the text.