Tokenizer
class Tokenizer
Constructors
Name | Description
---|---
constructor() | A tokenizer capable of splitting a raw text-format WASM file into its component tokens. From the docs: "The character stream in the source text is divided, from left to right, into a sequence of tokens, as defined by the following grammar. Tokens are formed from the input character stream according to the longest match rule. That is, the next token always consists of the longest possible sequence of characters that is recognized by the above lexical grammar. Tokens can be separated by white space, but except for strings, they cannot themselves contain whitespace."
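The longest-match rule quoted above can be illustrated with a short, self-contained sketch. The patterns and names below (`patterns`, `lex`) are simplified stand-ins for illustration only, not the actual WASM lexical grammar or this class's implementation:

```kotlin
// Toy longest-match lexer: at each position, every pattern is tried against
// the remaining input and the longest successful match wins.
val patterns = listOf(
    Regex("^[a-z][a-z0-9._]*"), // keyword-like tokens, e.g. "module", "i32.add"
    Regex("^-?[0-9]+"),         // integers
    Regex("^[()]")              // parentheses
)

fun lex(input: String): List<String> {
    val tokens = mutableListOf<String>()
    var rest = input.trimStart()
    while (rest.isNotEmpty()) {
        // Longest-match rule: take the longest prefix any pattern accepts.
        val match = patterns
            .mapNotNull { it.find(rest)?.value }
            .maxByOrNull { it.length }
            ?: error("No token matches at: $rest")
        tokens += match
        rest = rest.substring(match.length).trimStart()
    }
    return tokens
}

fun main() {
    // Prints: [(, module, (, memory, 1, ), )]
    println(lex("(module (memory 1))"))
}
```

Note that "i32.add" lexes as a single keyword-like token rather than "i32" followed by ".add", exactly because the longest match is preferred.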
Methods
tokenize
fun tokenize(source: Reader, context: ParseContext?): List<Token>
Tokenizes text-format source read from the provided Reader.
Parameters
Name | Description
---|---
source: Reader | The source code to tokenize.
context: ParseContext? |
Return Value

Name | Description
---|---
List<Token> | The tokens read from the source.
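A minimal usage sketch for this overload. The import path is an assumption about the project layout and may need adjusting; passing null for context simply supplies no parse context:

```kotlin
import java.io.StringReader
import kwasm.format.text.Tokenizer // assumed package; adjust to the real location

fun main() {
    // Any java.io.Reader works; a FileReader over a .wat file is the typical case.
    val reader = StringReader("(module (memory 1))")
    val tokens = Tokenizer().tokenize(reader, null)
    println("Read ${tokens.size} tokens")
}
```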
tokenize
fun tokenize(source: String, context: ParseContext?): List<Token>
Tokenizes the provided text-format source string.
Parameters
Name | Description
---|---
source: String | The source code to tokenize.
context: ParseContext? |
Return Value

Name | Description
---|---
List<Token> | The tokens read from the source.
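The String overload covers the in-memory case without the Reader plumbing. A minimal sketch, under the same assumed import as above:

```kotlin
import kwasm.format.text.Tokenizer // assumed package; adjust to the real location

fun main() {
    // Equivalent to the Reader overload, but takes the source directly.
    val tokens = Tokenizer().tokenize("(module (memory 1))", null)
    tokens.forEach(::println)
}
```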