Ngram Analyzer
Access in search queries: ngram
The starting point for exact substring matches was ngram tokenizing, which indexes all substrings up to length n. The drawback of ngram tokenizing is the large amount of disk space it uses.
Best practice:
The settings let you define the min and max gram lengths created at indexing time, as well as token_chars, the character classes to keep in tokens; Elasticsearch splits on characters that don't belong to any of these classes. A settings sketch follows.
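A minimal settings sketch (the index, analyzer, tokenizer, and field names are placeholders, and min_gram/max_gram of 3/4 are assumed values, not a recommendation):

PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_ngram_analyzer": {
          "type": "custom",
          "tokenizer": "my_ngram_tokenizer"
        }
      },
      "tokenizer": {
        "my_ngram_tokenizer": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 4,
          "token_chars": [ "letter", "digit" ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "analyzer": "my_ngram_analyzer"
      }
    }
  }
}

Note that recent Elasticsearch versions limit the difference between max_gram and min_gram via index.max_ngram_diff (default 1), which is another guard against index bloat.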
Example: Search "Raven" (see the _analyze sketch below)
Example: Search "Pegasus" (see the query sketch below)
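A sketch of how these searches behave against the settings above (with the assumed min_gram 3 and max_gram 4, "Raven" is indexed as the grams Rav, Rave, ave, aven, ven):

POST /my_index/_analyze
{
  "analyzer": "my_ngram_analyzer",
  "text": "Raven"
}

At search time the same analyzer expands "Pegasus" into its grams as well; setting "operator": "and" requires all of them to match, which approximates an exact substring match:

GET /my_index/_search
{
  "query": {
    "match": {
      "name": {
        "query": "Pegasus",
        "operator": "and"
      }
    }
  }
}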
Examples are in the Elasticsearch documentation:
https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-ngram-tokenizer.html