
The Picturepark search offers three search modes as well as search suggestions from the list settings.
When to use which search mode is explained below.

AND Search

The AND search finds content that contains all of the entered search terms. If you search for "Stock shot", for example, Picturepark translates this into Stock AND shot and looks for images that contain both values.

OR Search

When using the OR search, Picturepark translates the search term "Stock Shot" into "Stock OR Shot" and finds content that contains one or more of the entered search terms.
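
The translation into a boolean query can be sketched in a few lines. This is only an illustration of the behaviour described above, not the actual Picturepark implementation:

```python
def to_query(term: str, mode: str = "AND") -> str:
    """Illustration only: join the entered words with the selected operator."""
    words = term.split()
    return f" {mode} ".join(words)

print(to_query("Stock shot", "AND"))  # Stock AND shot -> all words must match
print(to_query("Stock shot", "OR"))   # Stock OR shot  -> at least one word must match
```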

Advanced Search

Picturepark supports a wide range of exact, fuzzy, and wildcard searches. You can access the advanced search cheat sheet with the examples below. These queries only work in "Advanced Mode". They allow you to search for specific values in specific fields on specific layers. Check the individual syntax for each field.
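
The cheat sheet defines the exact operators. As a rough orientation, the sketch below lists typical Lucene/Elasticsearch query-string operators that this kind of advanced search commonly builds on; the field path basicInformation.name is a hypothetical placeholder, not a confirmed Picturepark field, so always check the cheat sheet and the per-field syntax:

```python
# Hypothetical advanced-search query strings (Lucene query-string style).
# Field paths are placeholders; real paths depend on your schema and layers.
examples = {
    "exact phrase":          '"stock shot"',
    "fuzzy (typo-tolerant)": "picturpark~",
    "wildcard":              "pict*park",
    "field-specific":        'basicInformation.name:"stock shot"',
}
for label, query in examples.items():
    print(f"{label:22} {query}")
```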

Search Analyzers

Search Analyzers define how text is processed and manipulated. These analyzers give you control over how your text data is used in the search. The goal is to standardize text, for example by lowercasing, converting special characters (diacritics), or handling singular/plural forms (e.g. men, man). Search Analyzers are available for string and translated string fields.

Simple Search Analyzer

Access in search queries: simple

The simple search analyzer is a custom Picturepark implementation that does not use the Elasticsearch defaults. The custom analyzer uses the following regex:

  • Regex

    ([^\p{L}\d]+)|(?<=\D)(?=\d)|(?<=\d)(?=\D)|(?<=[\p{L}&&[^\p{Lu}]])(?=\p{Lu})|(?<=\p{Lu})(?=\p{Lu}[\p{L}&&[^\p{Lu}]])
  • Outcome:

    • Lowercase / Uppercase

    • Digit / non-digit

    • Stemming

    • HTML Strip

  • Examples

    • Picturepark = Picturepark, picturepark

    • Case Study = Case, Study, case, study

If you want to test the simple search analyzer, you can check your terms in a regex tester to see the outcome.

  1. Open a regex checker

    1. https://regex101.com/

    2. https://regexr.com/

  2. Add your term as a test string

  3. Check the outcome
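
As an alternative to an online regex tester, the splitting behaviour can be approximated locally. The original pattern uses Unicode property classes and character-class intersection (&&), which Python's re module does not support, so the sketch below is only an ASCII approximation of the documented regex:

```python
import re

# ASCII approximation of the documented split pattern.
SPLIT = re.compile(
    r"([^A-Za-z0-9]+)"            # runs of non-letter/non-digit characters
    r"|(?<=\D)(?=\d)"             # non-digit -> digit boundary
    r"|(?<=\d)(?=\D)"             # digit -> non-digit boundary
    r"|(?<=[a-z])(?=[A-Z])"       # camelCase boundary (lower -> upper)
    r"|(?<=[A-Z])(?=[A-Z][a-z])"  # acronym boundary ("XMLFile" -> "XML", "File")
)

def simple_tokens(text: str) -> list[str]:
    parts = SPLIT.split(text)
    words = [p for p in parts if p and re.search(r"[A-Za-z0-9]", p)]
    # The analyzer keeps the original term and a lowercased variant.
    return sorted(set(words) | {w.lower() for w in words})

print(simple_tokens("Case Study"))   # ['Case', 'Study', 'case', 'study']
print(simple_tokens("Picturepark"))  # ['Picturepark', 'picturepark']
```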

No Diacritics Analyzer

Access in search queries: no-diacritics

The no diacritics analyzer:

  • only works for text fields

  • strips diacritic characters, so when the text value is "Kovačić Mateo", you can search for "Kovačić Mateo" or "Kovacic Mateo".

An example can be found in the Elasticsearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-asciifolding-tokenfilter.html
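
If an Elasticsearch instance is available for testing, the asciifolding token filter from the linked documentation can be tried via the _analyze API. The sketch assumes Elasticsearch is reachable at localhost:9200; it is not a Picturepark API call:

```python
import requests

# Ask Elasticsearch to analyze a text with the asciifolding token filter.
body = {
    "tokenizer": "standard",
    "filter": ["asciifolding"],
    "text": "Kovačić Mateo",
}
resp = requests.post("http://localhost:9200/_analyze", json=body)
print([t["token"] for t in resp.json()["tokens"]])  # ['Kovacic', 'Mateo']
```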

Path Hierarchy Analyzer

Access in search queries: pathHierarchy

The path hierarchy analyzer will:

  • Take a path found in a field (e.g. picturepark\platform\manual) and split it at the delimiter into the individual hierarchical terms

  • Example

    • picturepark\platform\manual = picturepark\platform\manual, picturepark\platform, picturepark

    • Products/Family/Industry = Products/Family, Products, Products/Family/Industry

You should only configure this analyzer if it is used via the API. The simple search in Picturepark escapes special characters, and therefore you won't find assets when searching for some of the tokens generated by this analyzer.
An example can be found in the Elasticsearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-pathhierarchy-tokenizer.html
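
The tokens produced by the path hierarchy tokenizer can be inspected with the same _analyze API (again assuming a test Elasticsearch instance at localhost:9200; illustration only):

```python
import requests

body = {
    "tokenizer": {"type": "path_hierarchy", "delimiter": "/"},
    "text": "Products/Family/Industry",
}
resp = requests.post("http://localhost:9200/_analyze", json=body)
print([t["token"] for t in resp.json()["tokens"]])
# ['Products', 'Products/Family', 'Products/Family/Industry']
```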

Language Analyzer

Access in search queries: language

There are several language analyzers available in Elasticsearch. Language analyzers provide language-specific stemming and remove language-specific stopwords.
https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-lang-analyzer.html
The current implementation uses the default Elasticsearch language analyzers as listed in the link. We are using the default stop words and stemming rules, without any custom adaptations.
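
To see stemming and stopword removal in practice, the built-in language analyzers can also be tested via the _analyze API (assuming a test Elasticsearch instance at localhost:9200):

```python
import requests

body = {"analyzer": "english", "text": "The foxes are jumping"}
resp = requests.post("http://localhost:9200/_analyze", json=body)
print([t["token"] for t in resp.json()["tokens"]])
# ['fox', 'jump'] - stopwords removed, remaining terms stemmed
```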

Ngram Analyzer

Access in search queries: ngram

The starting point for exact substring matches is ngram tokenizing, which indexes all substrings up to length n. The drawback of ngram tokenizing is the large amount of disk space used.
Best practice:

  • Use ngram only if required - use carefully and not for every string

The settings allow you to define the minimum and maximum gram lengths created at indexing time, as well as token_chars, the character classes to keep in tokens; Elasticsearch splits on characters that don't belong to any of these classes.
Example: Search "Raven"

  • NGram tokens (the term is split into overlapping substrings):

  • Rav

  • Rave

  • Raven

  • ave

  • aven

  • ven

  • ...

Example: Search "Pegasus"

  • Because the ngram tokens of both terms overlap (e.g. "egas"), a search for "Pegasus" can also return content containing "Degas".

Examples can be found in the Elasticsearch documentation:
https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-ngram-tokenizer.html
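
The ngram tokenizer can likewise be tried via the _analyze API (assuming a test Elasticsearch instance at localhost:9200; the min_gram/max_gram values below are illustrative, not the Picturepark defaults):

```python
import requests

body = {
    "tokenizer": {
        "type": "ngram",
        "min_gram": 3,
        "max_gram": 4,
        "token_chars": ["letter", "digit"],
    },
    "text": "Raven",
}
resp = requests.post("http://localhost:9200/_analyze", json=body)
print([t["token"] for t in resp.json()["tokens"]])
# ['Rav', 'Rave', 'ave', 'aven', 'ven']
```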

Edge NGram Analyzer

Access in search queries: edgeNGram

This tokenizer is very similar to ngram but only keeps n-grams that start at the beginning of a token. The settings allow you to define the minimum and maximum gram lengths created at indexing time, as well as token_chars, the character classes to keep in tokens; Elasticsearch splits on characters that don't belong to any of these classes.

Examples can be found in the Elasticsearch documentation:
https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-edgengram-tokenizer.html
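
The edge_ngram tokenizer can be inspected the same way (same assumptions as above); note that every gram starts at the beginning of the token:

```python
import requests

body = {
    "tokenizer": {
        "type": "edge_ngram",
        "min_gram": 2,
        "max_gram": 5,
        "token_chars": ["letter", "digit"],
    },
    "text": "Raven",
}
resp = requests.post("http://localhost:9200/_analyze", json=body)
print([t["token"] for t in resp.json()["tokens"]])
# ['Ra', 'Rav', 'Rave', 'Raven']
```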

For advanced search queries on analyzed fields, the query can be adjusted to consider the analyzer. 
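
As a sketch of what such an adjusted query can look like: the analyzer names noted above ("Access in search queries") are appended to the field path. The field path below is a hypothetical placeholder and the exact syntax may differ; check the advanced search cheat sheet:

```python
# Hypothetical advanced queries targeting analyzed variants of a field.
# "imageLayer.title" is a placeholder, not a confirmed Picturepark field path.
queries = [
    "imageLayer.title.simple:picturepark",     # simple analyzer
    "imageLayer.title.no-diacritics:kovacic",  # no diacritics analyzer
    "imageLayer.title.ngram:aven",             # ngram analyzer
]
for q in queries:
    print(q)
```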
