Fixes typo

Kelvin Lau 2019-01-15 17:41:19 -08:00
parent 787d3ed127
commit 4a42e6c582
1 changed file with 2 additions and 2 deletions


@@ -77,7 +77,7 @@ The reason `Tokenizer` doesn't simply *split* the string is to enable Splash to
Which delimiters `Tokenizer` should use to split the code into segments is determined by the given language `Grammar`. The default implementation is called `SwiftGrammar`, which aims to mimic the behavior of the Swift compiler as closely as possible without actually having to compile the code (which is what enables Splash to be so fast).
-The decision to not simply hardcode `SwitftGrammar` across the code base was to [decouple the code using a protocol](https://www.swiftbysundell.com/posts/separation-of-concerns-using-protocols-in-swift) (in this case `Grammar`) to achieve a much more flexible solution. If the Swift grammar changes a lot in the future, we can always add a second implementation while still maintaining backward compatibility, and it also opens up the possibility of using Splash with languages other than Swift - since it doesn't make many (if any) hard assumptions about Swift itself (Objective-C support, anyone? 😉).
+The decision to not simply hardcode `SwiftGrammar` across the code base was to [decouple the code using a protocol](https://www.swiftbysundell.com/posts/separation-of-concerns-using-protocols-in-swift) (in this case `Grammar`) to achieve a much more flexible solution. If the Swift grammar changes a lot in the future, we can always add a second implementation while still maintaining backward compatibility, and it also opens up the possibility of using Splash with languages other than Swift - since it doesn't make many (if any) hard assumptions about Swift itself (Objective-C support, anyone? 😉).
Apart from supplying `Tokenizer` with delimiters, the most important role of a `Grammar` implementation is to provide an array of `SyntaxRule` implementations. When `SyntaxHighlighter` iterates through the segments that its `Tokenizer` gave it, it applies the syntax rules from its `Grammar` to each one to figure out each token's type. Each rule is asked whether it matches a given segment, and as soon as a match is found, that rule's `TokenType` determines the type of the token.
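
To make the relationship described in this file more concrete, here is a minimal sketch of how the `Grammar`/`SyntaxRule` pairing might look in code. The protocol names come from the prose above; the exact signatures, the `Segment` stub, and `KeywordRule` are illustrative assumptions rather than Splash's actual declarations.

```swift
import Foundation

// Possible token categories; illustrative, not Splash's full set.
enum TokenType {
    case keyword, string, number, comment, plainText
}

// A segment handed to each rule. `currentToken` is a hypothetical
// stand-in for the richer context Splash's tokenizer provides.
struct Segment {
    let currentToken: String
}

protocol SyntaxRule {
    var tokenType: TokenType { get }
    func matches(_ segment: Segment) -> Bool
}

protocol Grammar {
    // The delimiters the tokenizer uses to split code into segments.
    var delimiters: CharacterSet { get }
    // Rules applied to each segment; the first match wins.
    var syntaxRules: [SyntaxRule] { get }
}

// A toy rule that classifies a handful of Swift keywords.
struct KeywordRule: SyntaxRule {
    let tokenType = TokenType.keyword

    func matches(_ segment: Segment) -> Bool {
        ["let", "var", "func", "struct", "enum"].contains(segment.currentToken)
    }
}

// First-match-wins lookup, as described above: the first rule that
// matches a segment decides the token's type.
func tokenType(for segment: Segment, using grammar: Grammar) -> TokenType {
    grammar.syntaxRules.first { $0.matches(segment) }?.tokenType ?? .plainText
}
```

Because the highlighter only depends on the `Grammar` protocol, a hypothetical `ObjectiveCGrammar` could be swapped in without touching the tokenizer or the existing rules, which is exactly the flexibility the decoupling above is meant to buy.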