Input format
Lexer
CHAR → [U+0000-U+D7FF U+E000-U+10FFFF] // a Unicode scalar value
ASCII → [U+0000-U+007F]
NUL → U+0000
This chapter describes how a source file is interpreted as a sequence of tokens.
See Crates and source files for a description of how programs are organized into files.
Source encoding
Each source file is interpreted as a sequence of Unicode characters encoded in UTF-8.
It is an error if the file is not valid UTF-8.
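As a minimal sketch of this rule (not the compiler's actual decoding path), UTF-8 validity can be checked with the standard library's `std::str::from_utf8`:

```rust
use std::str;

fn main() {
    // Well-formed UTF-8 decodes successfully.
    assert!(str::from_utf8(b"fn main() {}").is_ok());
    // The byte 0xFF can never appear in valid UTF-8, so decoding fails.
    assert!(str::from_utf8(&[0x66, 0x6e, 0xff]).is_err());
    println!("ok");
}
```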
Byte order mark removal
If the first character in the sequence is U+FEFF (BYTE ORDER MARK), it is removed.
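The rule can be sketched as follows (`remove_bom` is a hypothetical helper, not compiler code): only a single U+FEFF is removed, and only when it is the very first character.

```rust
// Remove at most one leading U+FEFF (BYTE ORDER MARK).
fn remove_bom(s: &str) -> &str {
    s.strip_prefix('\u{FEFF}').unwrap_or(s)
}

fn main() {
    assert_eq!(remove_bom("\u{FEFF}fn main() {}"), "fn main() {}");
    // A BOM anywhere other than the start is left alone.
    assert_eq!(remove_bom("fn\u{FEFF}"), "fn\u{FEFF}");
    // With two leading BOMs, only the first is removed.
    assert_eq!(remove_bom("\u{FEFF}\u{FEFF}"), "\u{FEFF}");
    println!("ok");
}
```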
CRLF normalization
Each pair of characters U+000D (CR) immediately followed by U+000A (LF) is replaced by a single U+000A (LF). This replacement is performed in a single pass, not repeatedly, so the normalized input can still contain U+000D (CR) immediately followed by U+000A (LF): for example, the raw input “CR CR LF LF” normalizes to “CR LF LF”.
Other occurrences of the character U+000D (CR) are left in place (they are treated as whitespace).
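The single-pass behavior described above can be sketched like this (`normalize_crlf` is an illustrative helper, not the compiler's implementation): each CR LF pair is replaced left to right in one pass, so newly adjacent CR LF sequences are not replaced again.

```rust
// One-pass CRLF normalization: replace each "\r\n" pair with "\n",
// scanning left to right exactly once.
fn normalize_crlf(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    let mut chars = input.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '\r' && chars.peek() == Some(&'\n') {
            chars.next(); // consume the LF and emit a single LF
            out.push('\n');
        } else {
            out.push(c);
        }
    }
    out
}

fn main() {
    assert_eq!(normalize_crlf("a\r\nb"), "a\nb");
    // One pass only: "CR CR LF LF" leaves a CR LF pair behind.
    assert_eq!(normalize_crlf("\r\r\n\n"), "\r\n\n");
    // A bare CR is left in place.
    assert_eq!(normalize_crlf("a\rb"), "a\rb");
    println!("ok");
}
```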
Shebang removal
If a shebang is present, it is removed from the input sequence (and is therefore ignored).
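A simplified sketch of this step (`remove_shebang` is a hypothetical helper; the precise definition of a shebang, including its interaction with inner attributes such as `#![...]`, is given by the grammar and is omitted here): a first line beginning with `#!` is dropped.

```rust
// Drop a leading "#!" line. Simplification: the real rule does not
// treat `#!` followed by `[` (an inner attribute) as a shebang.
fn remove_shebang(s: &str) -> &str {
    if let Some(rest) = s.strip_prefix("#!") {
        // Skip to the end of the shebang line, keeping the newline itself.
        match rest.find('\n') {
            Some(i) => &rest[i..],
            None => "",
        }
    } else {
        s
    }
}

fn main() {
    assert_eq!(
        remove_shebang("#!/usr/bin/env rust\nfn main() {}"),
        "\nfn main() {}"
    );
    assert_eq!(remove_shebang("fn main() {}"), "fn main() {}");
    println!("ok");
}
```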
Tokenization
The resulting sequence of characters is then converted into tokens as described in the remainder of this chapter.
Note
The standard library include! macro applies the following transformations to the file it reads:
- Byte order mark removal.
- CRLF normalization.
- Shebang removal when invoked in an item context (as opposed to expression or statement contexts).
The include_str! and include_bytes! macros do not apply these transformations.