...
Picturepark supports a wide range of exact, fuzzy, and replacement searches. You can access the advanced search cheat sheet with the examples below. These search queries only work in Advanced Mode. They let you search for specific values in specific fields on specific layers. Check the individual syntax per field.
Simple Search Analyzer
Access in search queries: simple
The simple search analyzer is a custom Picturepark implementation that does not use the Elasticsearch defaults. The custom analyzer uses a regular expression to split terms.
If you want to test the simple search analyzer, you can check your terms in a regex tester to see the outcome.
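If you want a quick local approximation, the Python sketch below tokenizes a term with a placeholder regular expression; the pattern shown is an assumption for illustration only, not the actual regex used by the Picturepark simple search analyzer.

```python
# Placeholder regex tokenizer for experimenting with terms locally.
# The pattern below (word characters only) is NOT the Picturepark regex.
import re

PLACEHOLDER_PATTERN = re.compile(r"\w+", re.UNICODE)

def simple_tokenize(text: str) -> list[str]:
    # Lowercase the matches of the pattern and return them as tokens.
    return [t.lower() for t in PLACEHOLDER_PATTERN.findall(text)]

print(simple_tokenize("Annual_Report-2023 (final).pdf"))
# -> ['annual_report', '2023', 'final', 'pdf']
```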
No Diacritics Analyzer
Access in search queries: no-diacritics
The no diacritics analyzer removes diacritics by folding characters such as é, ü, and ç to their ASCII equivalents (e, u, c), using the asciifolding token filter.
An example can be found in the Elasticsearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-asciifolding-tokenfilter.html
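The Python sketch below approximates the effect of ASCII folding via Unicode decomposition; the real asciifolding filter covers additional folds (for example ß to ss), so treat it as an illustration only.

```python
# Approximate ASCII folding: decompose characters and drop combining marks,
# so "Zürich" and "Zurich" become the same term.
import unicodedata

def fold_diacritics(text: str) -> str:
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

print(fold_diacritics("Zürich, crème brûlée, São Paulo"))
# -> "Zurich, creme brulee, Sao Paulo"
```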
Path Hierarchy Analyzer
Access in search queries: pathHierarchy
The path hierarchy analyzer splits a path-like value on its separator and emits one token per hierarchy level, so /one/two/three is indexed as /one, /one/two, and /one/two/three.
You should only configure this analyzer if it is used via the API. The simple search in Picturepark escapes special characters, so you won't find assets when searching for some of the tokens generated by this analyzer.
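A minimal Python sketch of this tokenization, assuming a "/" delimiter and default settings:

```python
# Emit one token per path level: /a/b/c -> /a, /a/b, /a/b/c
def path_hierarchy_tokens(path: str, delimiter: str = "/") -> list[str]:
    parts = path.split(delimiter)
    tokens = []
    for i in range(1, len(parts) + 1):
        token = delimiter.join(parts[:i])
        if token:  # skip the empty prefix produced by a leading delimiter
            tokens.append(token)
    return tokens

print(path_hierarchy_tokens("/marketing/campaigns/2023/summer"))
# -> ['/marketing', '/marketing/campaigns',
#     '/marketing/campaigns/2023', '/marketing/campaigns/2023/summer']
```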
Language Analyzer
Access in search queries: language
Several language analyzers are available in Elasticsearch. Language analyzers apply language-specific stemming and remove language-specific stop words.
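As a rough illustration of the idea, the sketch below combines a tiny stop-word list with naive suffix stripping; the actual Elasticsearch language analyzers use curated stop-word lists and proper stemming algorithms, so this is not what runs in the index.

```python
# Very rough approximation of an English language analyzer:
# stop-word removal followed by naive suffix stripping.
STOP_WORDS = {"the", "a", "an", "of", "and", "to", "is", "are", "over"}

def naive_english_analyze(text: str) -> list[str]:
    tokens = [t.lower() for t in text.split()]
    tokens = [t for t in tokens if t not in STOP_WORDS]
    stemmed = []
    for t in tokens:
        for suffix in ("ing", "ed", "es", "s"):
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[: -len(suffix)]
                break
        stemmed.append(t)
    return stemmed

print(naive_english_analyze("The quick foxes jumped over the lazy dogs"))
# -> ['quick', 'fox', 'jump', 'lazy', 'dog']
```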
Ngram Analyzer
Access in search queries: ngram
The starting point for exact substring matches is ngram tokenizing, which indexes all substrings up to length n. The drawback of ngram tokenizing is the large amount of disk space it uses.
Settings allow you to define the minimum and maximum gram length created on indexing, as well as token_chars, the character classes to keep in tokens; Elasticsearch splits on characters that do not belong to any of these classes.
Example: search for "Pegasus"
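A minimal Python sketch of the ngram tokens generated for this term, assuming min_gram and max_gram of 3 (the actual values depend on the index configuration):

```python
# Generate all n-grams of a term for gram lengths min_gram..max_gram.
def ngrams(term: str, min_gram: int = 3, max_gram: int = 3) -> list[str]:
    grams = []
    for size in range(min_gram, max_gram + 1):
        for start in range(len(term) - size + 1):
            grams.append(term[start:start + size])
    return grams

print(ngrams("pegasus"))
# -> ['peg', 'ega', 'gas', 'asu', 'sus']
```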
Examples can be found in the Elasticsearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-ngram-tokenizer.html
Edge NGram Analyzer
Access in search queries: edgeNGram
This tokenizer is very similar to ngram but only keeps n-grams that start at the beginning of a token. Settings allow you to define the minimum and maximum gram length created on indexing, as well as token_chars, the character classes to keep in tokens; Elasticsearch splits on characters that do not belong to any of these classes.
Examples can be found in the Elasticsearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/7.6/analysis-edgengram-tokenizer.html
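A minimal Python sketch of edge n-gram tokenization, assuming min_gram 2 and max_gram 5:

```python
# Keep only the n-grams anchored at the start of the token.
def edge_ngrams(term: str, min_gram: int = 2, max_gram: int = 5) -> list[str]:
    return [term[:size] for size in range(min_gram, min(max_gram, len(term)) + 1)]

print(edge_ngrams("pegasus"))
# -> ['pe', 'peg', 'pega', 'pegas']
```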
Useful Links in the Elasticsearch Documentation
- Simple Analyzer: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-simple-analyzer.html
- No Diacritics: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-asciifolding-tokenfilter.html
- Path Hierarchy: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-pathhierarchy-tokenizer.html
- Language: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-lang-analyzer.html
- NGram: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-ngram-tokenizer.html
- EdgeNgram: https://www.elastic.co/guide/en/elasticsearch/reference/7.6/analysis-edgengram-tokenizer.html
For advanced search queries on analyzed fields, the query can be adjusted to consider the analyzer.
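For illustration, the sketch below lists possible query strings in which the analyzer name is appended to the field path, following the "access in search queries" names documented above; layerName.fieldName is a hypothetical field path, and the exact syntax should be checked per field in your Picturepark instance.

```python
# Illustrative query strings only. "layerName.fieldName" is a placeholder
# field path; the analyzer suffixes match the names documented above.
example_queries = [
    'layerName.fieldName.simple:"blue sky"',     # simple search analyzer
    "layerName.fieldName.no-diacritics:zurich",  # also matches "Zürich"
    "layerName.fieldName.ngram:egasu",           # substring match via ngrams
    "layerName.fieldName.edgeNGram:pega",        # prefix match via edge n-grams
]

for query in example_queries:
    print(query)
```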