ID: 32390
Doc Title: review_720045084
Restaurant Name: Tony_s_Di_Napoli_Midtown
Title Review: Fantastic food!
Passage Text:
The best pasta in New York! Great dessert and friendly staff. A bit noisy on a Sunday evening but a really nice evening close to Times square.
Score: 0.9367155
---
ID: 52686
Doc Title: review_651849097
Restaurant Name: Carmine_s_Italian_Restaurant_Times_Square
Title Review: Wonderful
Passage Text:
The best pasta in New York. The only problem is the size of the plates. They must do smaller plates. For one person for example.
Score: 0.90883017
---
ID: 73133
Doc Title: review_628690226
Restaurant Name: Il_Gattopardo
Title Review: Excellence
Passage Text:
Perhaps the best pasta in NY. They can deliver pasta al dente, as they have done that for us in the past.
Score: 0.89915013
---
ID: 1460
Doc Title: review_609031069
Restaurant Name: San_Carlo_Osteria_Piemonte
Title Review: Good if you are not italian
Passage Text:
Nice food in New York if you are not Italian but if you know how Italian food really is you can cook better at your home.Pasta not good
Score: 0.88570404
---
ID: 149
Doc Title: review_695311754
Restaurant Name: San_Carlo_Osteria_Piemonte
Title Review: Outstanding food, great service and atmosphere
Passage Text:
I'm a huge fan of picolla cucina on Spring St and I still think they have the best pastas in New York. It's my favorite in NYC, but a block away is San Carlo which may bemy second favorite. It is slightly different in terms of the menu, with less focus on pasta. It also has a slightly larger footprint with a small intimate bar, and has a very good wine and cocktail list.
Score: 0.8833201
---
ID: 70098
Doc Title: review_417272677
Restaurant Name: Forlini_s_Restaurant
Title Review: Buenísimo!!!
Passage Text:
Best pasta and minestrone soup ever, we been looking around in little Italy New york for a good Italian restaurant, I consult trip advisor. Found the place and was a delightful surprise. Jack where our hostess very kind and funny man. Definitely we are going to come back soon during our trip here in NY.
Score: 0.8831816
---
ID: 87324
Doc Title: review_241290115
Restaurant Name: Carmine_s_Italian_Restaurant_Times_Square
Title Review: Real Italian food
Passage Text:
Best classic Italian food in NYC.
Score: 0.8803612
---
ID: 21092
Doc Title: review_629514788
Restaurant Name: IL_Melograno
Title Review: Tastefull meal - worth a visit!!
Passage Text:
Best meal we’ve had in NYC! The pasta was just delicious / super fresh & the staff very friendly and kind. We would recommend it for sure!
Score: 0.8786392
---
ID: 22079
Doc Title: review_375834633
Restaurant Name: Orso
Title Review: Always a crowd pleaser!
Passage Text:
Love this restaurant and still mourn the closing of the LA spot. The best pastas and a perfect place to have lunch that "feels" like NYC! Very traditional and located very near the theater district, so you can hop in for an early dinner pre-show as well. You really can't go wrong ordering everything on the menu but my last visit, I had them make me a simple pasta with tomatoes and basil.
Score: 0.8776474
---
ID: 69039
Doc Title: review_467680511
Restaurant Name: Forlini_s_Restaurant
Title Review: The best. The very best.
Passage Text:
If tradition, quality service, and first-rate homemade pasta is your desire, then look no more. This place is simply the best in NYC. I've been here several times after stumbling on it last year. Wish I had found it earlier in my career; it would have made many of my previous visits to NYC even more satisfying to the palate -- and wallet. Love the family atmosphere.
Score: 0.8749021
PUT _ingest/pipeline/chunker
{
  "processors": [
    {
      "script": {
        "description": "Chunk body_content into sentences by looking for . followed by a space",
        "lang": "painless",
        "source": """
          String[] envSplit = /((?<!M(r|s|rs)\.)(?<=\.) |(?<=\!) |(?<=\?) )/.split(ctx['body_content']);
          ctx['passages'] = new ArrayList();
          int i = 0;
          boolean remaining = true;
          if (envSplit.length == 0) {
            return
          } else if (envSplit.length == 1) {
            Map passage = ['text': envSplit[0]];
            ctx['passages'].add(passage)
          } else {
            while (remaining) {
              Map passage = ['text': envSplit[i++]];
              while (i < envSplit.length && passage.text.length() + envSplit[i].length() < params.model_limit) {
                passage.text = passage.text + ' ' + envSplit[i++]
              }
              if (i == envSplit.length) {
                remaining = false
              }
              ctx['passages'].add(passage)
            }
          }
        """,
        "params": {
          "model_limit": 400
        }
      }
    },
    {
      "foreach": {
        "field": "passages",
        "processor": {
          "inference": {
            "model_id": "cohere_embeddings",
            "input_output": [
              {
                "input_field": "_ingest._value.text",
                "output_field": "_ingest._value.vector.predicted_value"
              }
            ],
            "on_failure": [
              {
                "append": {
                  "field": "_source._ingest.inference_errors",
                  "value": [
                    {
                      "message": "Processor 'inference' in pipeline 'ml-inference-title-vector' failed with message '{{ _ingest.on_failure_message }}'",
                      "pipeline": "ml-inference-title-vector",
                      "timestamp": "{{{ _ingest.timestamp }}}"
                    }
                  ]
                }
              }
            ]
          }
        }
      }
    }
  ]
}
Next, insert a small set of sample documents for the experiment:
PUT chunker/_doc/1?pipeline=chunker
{
  "title": "Adding passage vector search to Lucene",
  "body_content": "Vector search is a powerful tool in the information retrieval tool box. Using vectors alongside lexical search like BM25 is quickly becoming commonplace. But there are still a few pain points within vector search that need to be addressed. A major one is text embedding models and handling larger text input. Where lexical search like BM25 is already designed for long documents, text embedding models are not. All embedding models have limitations on the number of tokens they can embed. So, for longer text input it must be chunked into passages shorter than the model’s limit. Now instead of having one document with all its metadata, you have multiple passages and embeddings. And if you want to preserve your metadata, it must be added to every new document. A way to address this is with Lucene's “join” functionality. This is an integral part of Elasticsearch’s nested field type. It makes it possible to have a top-level document with multiple nested documents, allowing you to search over nested documents and join back against their parent documents. This sounds perfect for multiple passages and vectors belonging to a single top-level document! This is all awesome! But, wait, Elasticsearch® doesn’t support vectors in nested fields. Why not, and what needs to change? The key issue is how Lucene can join back to the parent documents when searching child vector passages. Like with kNN pre-filtering versus post-filtering, when the joining occurs determines the result quality and quantity. If a user searches for the top four nearest parent documents (not passages) to a query vector, they usually expect four documents. But what if they are searching over child vector passages and all four of the nearest vectors are from the same parent document? This would end up returning just one parent document, which would be surprising. This same kind of issue occurs with post-filtering."
}

PUT chunker/_doc/3?pipeline=chunker
{
  "title": "Automatic Byte Quantization in Lucene",
  "body_content": "While HNSW is a powerful and flexible way to store and search vectors, it does require a significant amount of memory to run quickly. For example, querying 1MM float32 vectors of 768 dimensions requires roughly 1,000,000 * 4 * (768 + 12) = 3,120,000,000 bytes ≈ 3GB of ram. Once you start searching a significant number of vectors, this gets expensive. One way to use around 75% less memory is through byte quantization. Lucene and consequently Elasticsearch has supported indexing byte vectors for some time, but building these vectors has been the user's responsibility. This is about to change, as we have introduced int8 scalar quantization in Lucene. All quantization techniques are considered lossy transformations of the raw data. Meaning some information is lost for the sake of space. For an in depth explanation of scalar quantization, see: Scalar Quantization 101. At a high level, scalar quantization is a lossy compression technique. Some simple math gives significant space savings with very little impact on recall. Those used to working with Elasticsearch may be familiar with these concepts already, but here is a quick overview of the distribution of documents for search. Each Elasticsearch index is composed of multiple shards. While each shard can only be assigned to a single node, multiple shards per index gives you compute parallelism across nodes. Each shard is composed as a single Lucene Index. A Lucene index consists of multiple read-only segments. During indexing, documents are buffered and periodically flushed into a read-only segment. When certain conditions are met, these segments can be merged in the background into a larger segment. All of this is configurable and has its own set of complexities. But, when we talk about segments and merging, we are talking about read-only Lucene segments and the automatic periodic merging of these segments. Here is a deeper dive into segment merging and design decisions."
}

PUT chunker/_doc/2?pipeline=chunker
{
  "title": "Use a Japanese language NLP model in Elasticsearch to enable semantic searches",
  "body_content": "Quickly finding necessary documents from among the large volume of internal documents and product information generated every day is an extremely important task in both work and daily life. However, if there is a high volume of documents to search through, it can be a time-consuming process even for computers to re-read all of the documents in real time and find the target file. That is what led to the appearance of Elasticsearch® and other search engine software. When a search engine is used, search index data is first created so that key search terms included in documents can be used to quickly find those documents. However, even if the user has a general idea of what type of information they are searching for, they may not be able to recall a suitable keyword or they may search for another expression that has the same meaning. Elasticsearch enables synonyms and similar terms to be defined to handle such situations, but in some cases it can be difficult to simply use a correspondence table to convert a search query into a more suitable one. To address this need, Elasticsearch 8.0 released the vector search feature, which searches by the semantic content of a phrase. Alongside that, we also have a blog series on how to use Elasticsearch to perform vector searches and other NLP tasks. However, up through the 8.8 release, it was not able to correctly analyze text in languages other than English. With the 8.9 release, Elastic added functionality for properly analyzing Japanese in text analysis processing. This functionality enables Elasticsearch to perform semantic searches like vector search on Japanese text, as well as natural language processing tasks such as sentiment analysis in Japanese. In this article, we will provide specific step-by-step instructions on how to use these features."
}

PUT chunker/_doc/5?pipeline=chunker
{
  "title": "We can chunk whatever we want now basically to the limits of a document ingest",
  "body_content": """Chonk is an internet slang term used to describe overweight cats that grew popular in the late summer of 2018 after a photoshopped chart of cat body-fat indexes renamed the "Chonk" scale grew popular on Twitter and Reddit. Additionally, "OhLawdHeComin'," the final level of the Chonk Chart, was adopted as an online catchphrase used to describe large objects, animals or people. It is not to be confused with the Saturday Night Live sketch of the same name. The term "Chonk" was popularized in a photoshopped edit of a chart illustrating cat body-fat indexes and the risk of health problems for each class (original chart shown below). The first known post of the "Chonk" photoshop, which classifies each cat to a certain level of "chonk"-ness ranging from "Afineboi" to "OHLAWDHECOMIN," was posted to Facebook group THIS CAT IS C H O N K Y on August 2nd, 2018 by Emilie Chang (shown below). The chart surged in popularity after it was tweeted by @dreamlandtea[1] on August 10th, 2018, gaining over 37,000 retweets and 94,000 likes (shown below). After the chart was posted there, it began growing popular on Reddit. It was reposted to /r/Delighfullychubby[2] on August 13th, 2018, and /r/fatcats on August 16th.[3] Additionally, cats were shared with variations on the phrase "Chonk." In @dreamlandtea's Twitter thread, she rated several cats on the Chonk scale (example, shown below, left). On /r/tumblr, a screenshot of a post featuring a "goodluckcat" titled "LuckyChonk" gained over 27,000 points (shown below, right). The popularity of the phrase led to the creation of a subreddit, /r/chonkers,[4] that gained nearly 400 subscribers in less than a month. Some photoshops of the chonk chart also spread on Reddit. For example, an edit showing various versions of Pikachu on the chart posted to /r/me_irl gained over 1,200 points (shown below, left). The chart gained further popularity when it was posted to /r/pics[5] September 29th, 2018."""
}
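One prerequisite this walkthrough does not show is the index mapping: for a kNN query over passages.vector.predicted_value to work, passages must be mapped as a nested field with a dense_vector sub-field. A plausible sketch follows; the dims of 1024 and dot_product similarity are assumptions matching Cohere's v3 embeddings, so adjust them to whatever model you use.

```
PUT chunker
{
  "mappings": {
    "properties": {
      "passages": {
        "type": "nested",
        "properties": {
          "text": { "type": "text" },
          "vector": {
            "properties": {
              "predicted_value": {
                "type": "dense_vector",
                "index": true,
                "dims": 1024,
                "similarity": "dot_product"
              }
            }
          }
        }
      }
    }
  }
}
```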
GET chunker/_search
{
  "_source": false,
  "fields": ["title"],
  "knn": {
    "inner_hits": {
      "_source": false,
      "fields": ["passages.text"]
    },
    "field": "passages.vector.predicted_value",
    "k": 2,
    "num_candidates": 100,
    "query_vector_builder": {
      "text_embedding": {
        "model_id": "cohere_embeddings",
        "model_text": "Can I use multiple vectors per document now?"
      }
    }
  }
}
The search results:
{"took":285,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":2,"relation":"eq"},"max_score":0.78212404,"hits":[{"_index":"chunker","_id":"1","_score":0.78212404,"_ignored":["body_content.keyword","passages.text.keyword"],"fields":{"title":["Adding passage vector search to Lucene"]},"inner_hits":{"passages":{"hits":{"total":{"value":6,"relation":"eq"},"max_score":0.78212404,"hits":[{"_index":"chunker","_id":"1","_nested":{"field":"passages","offset":3},"_score":0.78212404,"fields":{"passages":[{"text":["This sounds perfect for multiple passages and vectors belonging to a single top-level document! This is all awesome! But, wait, Elasticsearch® doesn’t support vectors in nested fields. Why not, and what needs to change? The key issue is how Lucene can join back to the parent documents when searching child vector passages."]}]}},{"_index":"chunker","_id":"1","_nested":{"field":"passages","offset":0},"_score":0.7376485,"fields":{"passages":[{"text":["Vector search is a powerful tool in the information retrieval tool box. Using vectors alongside lexical search like BM25 is quickly becoming commonplace. But there are still a few pain points within vector search that need to be addressed. A major one is text embedding models and handling larger text input."]}]}},{"_index":"chunker","_id":"1","_nested":{"field":"passages","offset":4},"_score":0.7086177,"fields":{"passages":[{"text":["Like with kNN pre-filtering versus post-filtering, when the joining occurs determines the result quality and quantity. If a user searches for the top four nearest parent documents (not passages) to a query vector, they usually expect four documents. 
But what if they are searching over child vector passages and all four of the nearest vectors are from the same parent document?"]}]}}]}}}},{"_index":"chunker","_id":"2","_score":0.704564,"_ignored":["body_content.keyword","passages.text.keyword"],"fields":{"title":["Use a Japanese language NLP model in Elasticsearch to enable semantic searches"]},"inner_hits":{"passages":{"hits":{"total":{"value":6,"relation":"eq"},"max_score":0.704564,"hits":[{"_index":"chunker","_id":"2","_nested":{"field":"passages","offset":3},"_score":0.704564,"fields":{"passages":[{"text":["Elasticsearch enables synonyms and similar terms to be defined to handle such situations, but in some cases it can be difficult to simply use a correspondence table to convert a search query into a more suitable one. To address this need, Elasticsearch 8.0 released the vector search feature, which searches by the semantic content of a phrase."]}]}},{"_index":"chunker","_id":"2","_nested":{"field":"passages","offset":4},"_score":0.6868271,"fields":{"passages":[{"text":["Alongside that, we also have a blog series on how to use Elasticsearch to perform vector searches and other NLP tasks. However, up through the 8.8 release, it was not able to correctly analyze text in languages other than English. With the 8.9 release, Elastic added functionality for properly analyzing Japanese in text analysis processing."]}]}},{"_index":"chunker","_id":"2","_nested":{"field":"passages","offset":0},"_score":0.6548239,"fields":{"passages":[{"text":["Quickly finding necessary documents from among the large volume of internal documents and product information generated every day is an extremely important task in both work and daily life. However, if there is a high volume of documents to search through, it can be a time-consuming process even for computers to re-read all of the documents in real time and find the target file."]}]}}]}}}}]}}
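Responses like the one above are easier to work with once flattened. Here is a small, hypothetical Python helper, assuming the response body has already been parsed into a dict (e.g. with json.loads), that pairs each parent title with its best-scoring nested passage:

```python
def best_passages(resp):
    """For each parent document hit, return a (title, score, passage text)
    tuple taken from its top-ranked inner_hits passage."""
    results = []
    for hit in resp['hits']['hits']:
        title = hit['fields']['title'][0]
        # inner_hits passages are already sorted by score, so index 0 is best.
        top = hit['inner_hits']['passages']['hits']['hits'][0]
        passage = top['fields']['passages'][0]['text'][0]
        results.append((title, top['_score'], passage))
    return results
```

The field paths mirror the response structure shown above (fields.title, inner_hits.passages.hits.hits).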
As expected, document 1, with a relevance score of 0.78, is indeed a much closer match for our question than document 2, which scores only about 0.70.
Let's switch to a different query, scalar quantization. The results are as follows:
{"took":424,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":2,"relation":"eq"},"max_score":0.8421624,"hits":[{"_index":"chunker","_id":"3","_score":0.8421624,"_ignored":["body_content.keyword","passages.text.keyword"],"fields":{"title":["Automatic Byte Quantization in Lucene"]},"inner_hits":{"passages":{"hits":{"total":{"value":6,"relation":"eq"},"max_score":0.8421624,"hits":[{"_index":"chunker","_id":"3","_nested":{"field":"passages","offset":2},"_score":0.8421624,"fields":{"passages":[{"text":["Meaning some information is lost for the sake of space. For an in depth explanation of scalar quantization, see: Scalar Quantization 101. At a high level, scalar quantization is a lossy compression technique. Some simple math gives significant space savings with very little impact on recall."]}]}},{"_index":"chunker","_id":"3","_nested":{"field":"passages","offset":1},"_score":0.7212088,"fields":{"passages":[{"text":["One way to use around 75% less memory is through byte quantization. Lucene and consequently Elasticsearch has supported indexing byte vectors for some time, but building these vectors has been the user's responsibility. This is about to change, as we have introduced int8 scalar quantization in Lucene. All quantization techniques are considered lossy transformations of the raw data."]}]}},{"_index":"chunker","_id":"3","_nested":{"field":"passages","offset":0},"_score":0.62174904,"fields":{"passages":[{"text":["While HNSW is a powerful and flexible way to store and search vectors, it does require a significant amount of memory to run quickly. For example, querying 1MM float32 vectors of 768 dimensions requires roughly 1,000,000∗4∗(768+12)=3120000000≈31,000,000∗4∗(768+12)=3120000000bytes≈3GB of ram. 
Once you start searching a significant number of vectors, this gets expensive."]}]}}]}}}},{"_index":"chunker","_id":"1","_score":0.60923594,"_ignored":["body_content.keyword","passages.text.keyword"],"fields":{"title":["Adding passage vector search to Lucene"]},"inner_hits":{"passages":{"hits":{"total":{"value":6,"relation":"eq"},"max_score":0.60923594,"hits":[{"_index":"chunker","_id":"1","_nested":{"field":"passages","offset":3},"_score":0.60923594,"fields":{"passages":[{"text":["This sounds perfect for multiple passages and vectors belonging to a single top-level document! This is all awesome! But, wait, Elasticsearch® doesn’t support vectors in nested fields. Why not, and what needs to change? The key issue is how Lucene can join back to the parent documents when searching child vector passages."]}]}},{"_index":"chunker","_id":"1","_nested":{"field":"passages","offset":0},"_score":0.59735155,"fields":{"passages":[{"text":["Vector search is a powerful tool in the information retrieval tool box. Using vectors alongside lexical search like BM25 is quickly becoming commonplace. But there are still a few pain points within vector search that need to be addressed. A major one is text embedding models and handling larger text input."]}]}},{"_index":"chunker","_id":"1","_nested":{"field":"passages","offset":2},"_score":0.59269404,"fields":{"passages":[{"text":["And if you want to preserve your metadata, it must be added to every new document. A way to address this is with Lucene's “join” functionality. This is an integral part of Elasticsearch’s nested field type. It makes it possible to have a top-level document with multiple nested documents, allowing you to search over nested documents and join back against their parent documents."]}]}}]}}}}]}}
This time the scores are 0.84 versus roughly 0.61, a far more pronounced relevance gap.
What's next
In this section we used an Elasticsearch ingest pipeline to ingest long-form documents and tested semantic search over them.
From an engineering standpoint, the results are satisfying.