Module lexer

Turn text input into a sequence of tokens.

We perform two levels of lexing:

  • lower: handles comments, annotations, spaces and newlines. Generates a sequence of lower::Lexeme instances.
  • Lexer: pulls text from the lower-level lexer and recognizes tokens. Generates a sequence of Token instances.
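
The two-level pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration only: the `Lexeme` and `Token` variants, the comment marker, and the function bodies are assumptions for the sketch, not the module's actual definitions.

```rust
/// Low-level lexeme: the `lower` pass strips comments and classifies
/// whitespace before the main lexer sees the text. (Variants assumed.)
#[derive(Debug, PartialEq)]
enum Lexeme {
    Text(String),
    Newline,
    Comment,
}

/// High-level token consumed by the parser. (Variants assumed.)
#[derive(Debug, PartialEq)]
enum Token {
    Word(String),
    Newline,
}

/// Lower pass: split line-oriented input into lexemes,
/// using `#` as a comment marker (an assumption for this sketch).
fn lower(input: &str) -> Vec<Lexeme> {
    let mut out = Vec::new();
    for line in input.lines() {
        if let Some(idx) = line.find('#') {
            if !line[..idx].trim().is_empty() {
                out.push(Lexeme::Text(line[..idx].trim().to_string()));
            }
            out.push(Lexeme::Comment);
        } else if !line.trim().is_empty() {
            out.push(Lexeme::Text(line.trim().to_string()));
        }
        out.push(Lexeme::Newline);
    }
    out
}

/// Upper pass: pull lexemes from `lower` and emit parser-ready tokens.
fn lex(input: &str) -> Vec<Token> {
    lower(input)
        .into_iter()
        .filter_map(|lx| match lx {
            Lexeme::Text(t) => Some(Token::Word(t)),
            Lexeme::Newline => Some(Token::Newline),
            Lexeme::Comment => None, // comments never reach the parser
        })
        .collect()
}

fn main() {
    let tokens = lex("foo # comment\nbar");
    assert_eq!(
        tokens,
        vec![
            Token::Word("foo".into()),
            Token::Newline,
            Token::Word("bar".into()),
            Token::Newline
        ]
    );
}
```

Splitting the work this way keeps comment and whitespace handling out of the token recognizer, which then only has to deal with meaningful text.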

Modules§

lower 🔒
A “partial” lexer which determines whether we’re inside an RC-block or a comment.

Structs§

GlyphRecognizer 🔒
Convert a string into a sequence of Elevated<&Glyph>.
GlyphTokenizer 🔒
Tokenize a small part of the input, unifying the Unicode and @...@ representations.
Lexer 🔒
Tokenize the body of a source code file.
NumericLiteral 🔒
SpannedIter 🔒
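
The unification that GlyphTokenizer performs can be sketched as a lookup that maps either a glyph's Unicode character or its ASCII `@name@` spelling to one canonical form. The table entries below are invented for illustration; the real mapping is driven by the module's `Glyph` type.

```rust
// Hypothetical sketch: both spellings of a glyph collapse to the
// same canonical name, so later stages see a single representation.
fn unify_glyph(s: &str) -> Option<&'static str> {
    match s {
        "λ" | "@lambda@" => Some("lambda"), // example entries, assumed
        "→" | "@arrow@" => Some("arrow"),
        _ => None, // unrecognised input is left to error handling
    }
}

fn main() {
    // Unicode and @...@ forms produce the same result.
    assert_eq!(unify_glyph("λ"), unify_glyph("@lambda@"));
    assert_eq!(unify_glyph("@arrow@"), Some("arrow"));
}
```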

Enums§

ErrorTokenKind 🔒
Represents an item in the input which we didn’t recognise or don’t support.
Token 🔒
The parser consumes these tokens.
TokenMergeResult 🔒

Constants§

DOT_CHAR 🔒
DOT_STR 🔒

Functions§

merge_tokens 🔒
tokenise_single_glyph 🔒

Type Aliases§

Span 🔒