indexing – Flax – The Open Source Search Specialists – http://www.flax.co.uk

Elasticsearch, Kibana and duplicate keys in JSON
Thu, 03 Aug 2017 – http://www.flax.co.uk/blog/2017/08/03/inconsistent-json-semantics-headache/

JSON has been the lingua franca of data exchange for many years. It’s human-readable, lightweight and widely supported. However, the JSON spec does not define what parsers should do when they encounter a duplicate key in an object, e.g.:

{
  "foo": "spam",
  "foo": "eggs",
  ...
}

Implementations are free to interpret this how they like. When different systems have different interpretations this can cause problems.
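As an illustration of how much behaviour can vary, here is what one common parser – Python's standard json module – does with a document like the one above: the last value silently wins, although object_pairs_hook can be used to detect the duplicates if you need to:

import json

doc = '{"foo": "spam", "foo": "eggs"}'

# The standard library parser keeps only the *last* value for a repeated key.
print(json.loads(doc))          # {'foo': 'eggs'}

# object_pairs_hook receives every key/value pair, so duplicates can be caught.
def reject_duplicates(pairs):
  keys = [k for k, _ in pairs]
  if len(keys) != len(set(keys)):
    raise ValueError("duplicate keys: {}".format(keys))
  return dict(pairs)

try:
  json.loads(doc, object_pairs_hook=reject_duplicates)
except ValueError as e:
  print(e)                      # duplicate keys: ['foo', 'foo']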

We recently encountered this in an Elasticsearch project. The customer reported unusual search behaviour around a boolean field called draft. In particular, documents which were thought to contain a true value for draft were being excluded by the query clause

{
  "query":
    "bool": {
      "must_not": {
        "term": { "draft": false }
      },
      ...

The version of Elasticsearch was 2.4.5 and we examined the index with Sense on Kibana 4.6.3. The documents in question did indeed appear to have the value

{
  "draft": true,
  ...
}

and therefore should not have been excluded by the must_not query clause.

To get to the bottom of it, we used Marple to examine the terms in the index. Under the bonnet, the boolean type is indexed as the term “T” for true and “F” for false. The documents which were behaving oddly had both “T” and “F” terms for the draft field, and were therefore being excluded by the must_not clause. But how did the extra “F” term get in there?
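To make the mechanics concrete, here is a toy model in Python (purely illustrative – this is not Lucene code) of why a document carrying both terms matches a term query on false and is therefore dropped by must_not:

# What Marple showed us: the problem documents had both boolean terms indexed.
indexed_terms = {"draft": {"T", "F"}}

def matches_term(field, value):
  # A term query simply asks: is this term present for the field?
  term = "T" if value else "F"
  return term in indexed_terms.get(field, set())

# must_not excludes any document matching the inner query...
print(matches_term("draft", False))   # True -- so the document is excluded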

After some more experimentation we tracked it down to a bug in our indexer application, which under certain conditions was creating documents with duplicate draft keys:

{
  "draft": false,
  "draft": true
  ...
}

So why was this not appearing in the Sense output? It turns out that Elasticsearch and Sense/Kibana interpret duplicate keys in different ways. When we used curl instead of Sense we could see both draft items in the _source field. Elasticsearch was behaving consistently, storing and indexing both draft fields. However, Sense/Kibana was quietly dropping the first instance of the field and displaying only the second, true, value.
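One way to check for this yourself is to look at the raw response body rather than anything that has already been re-parsed and re-serialised, since most JSON parsers will collapse the duplicate just as Sense does. A minimal sketch using Python's requests module, with a hypothetical index, type and document id:

import requests

# Hypothetical document URL -- substitute your own index, type and id.
url = "http://localhost:9200/myindex/mytype/1"

# Inspect the raw text of the response; parsing it first would hide the duplicate.
raw = requests.get(url).text
print(raw.count('"draft"'))   # 2 here means _source contains the key twice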

I’ve not looked at the Sense/Kibana source code, but I imagine this is just a consequence of being implemented in Javascript. I tested this in Chrome (59.0.3071.115 on macOS) with the following script:

<!DOCTYPE html>
<html>
  <head></head>
  <body>
    <script>
      var o = {
        s: "this is some text",
        b: true,
        b: false
      };

      console.log("value of o.b", o.b);
      console.log("value of o", JSON.stringify(o, "", 2));
    </script>
  </body>
</html>

which output (with no warnings)

value of o.b true
test.html:13 value of o {
 "s": "this is some text",
 "b": true
}

(In fact it turns out that the order of b doesn't matter; true always overrides false.)

Ultimately this wasn’t caused by any bugs in Elasticsearch, Kibana, Sense or Javascript, but the different way that duplicate JSON keys were being handled made finding the ultimate source of the problem harder than it needed to be. If you are using the Kibana console (or Sense with older versions) for Elasticsearch development then this might be a useful thing to be aware of.

I haven’t tested Solr’s handling of duplicate JSON keys yet but that would probably be an interesting exercise.

London Lucene/Solr Meetup – Introducing Marple & Solr Classification
Mon, 27 Mar 2017 – http://www.flax.co.uk/blog/2017/03/27/london-lucenesolr-meetup-introducing-marple-solr-classification/

A small crowd for this month's London Lucene/Solr Meetup, kindly hosted by Barclays in their sumptuous Canary Wharf offices. I introduced the Meetup and spoke briefly on how Flax is currently looking for team members (want to work on a variety of cutting-edge open source search projects in the UK and abroad? Get in touch!) before handing over to Flax's Alan Woodward, who introduced our new Lucene index inspection tool, Marple.

Alan told us how Marple was conceived at the Lucene4IR event in Glasgow last year and how coding started at our Lucene Hackday in London. Although the well-known tool Luke allows one to dive deep into Lucene indexes, it hasn’t kept up with recent additions to Lucene index structures and we also wanted to build a tool with a RESTful API and separate GUI to allow it to be run easily on our client’s indexes in a read-only mode. Alan demonstrated Marple’s features including how it allows one to see the ‘hidden’ Lucene index fields that Elasticsearch creates. The first release of Marple is out and we’d welcome any feedback and contributions.

Next up was Alessandro Benedetti with an engaging talk about Solr's built-in document classification features, useful for everything from spam filtering to automatic product categorisation. Unlike many classification methods, this uses the Lucene index itself as the training set – this index must contain some documents with manually assigned classification fields. Either the K-Nearest-Neighbour or the Naive Bayes algorithm can be used to perform the classification via Solr's UpdateRequestProcessor chain, in Solr 6.1 and later. You can read more detail on Alessandro's excellent blog.

We concluded with a brief Q&A session and then popped downstairs to a pub for some snacks and drinks. Thanks to both our speakers, our hosts and all who came – we’ll return in a couple of months with talks that will include René Kriegler on his neat Querqy query processor.

Working with Hadoop, Kafka, Samza and the wider Big Data ecosystem
Thu, 03 Mar 2016 – http://www.flax.co.uk/blog/2016/03/03/working-hadoop-kafka-samza-wider-big-data-ecosystem/

We’ve been working on a number of projects recently involving open source software often described as ‘Big Data’ solutions – here’s a quick overview of them.

The grandfather of them all of course is Apache Hadoop, now not so much a single project as an ecosystem including storage and processing for potentially huge amounts of data, spread across clusters of machines. Interestingly Hadoop was originally created by Doug Cutting, who also wrote Lucene (the search library used by Apache Solr and Elasticsearch) and the Nutch web crawler. We’ve been helping clients to distribute processing tasks using Hadoop’s MapReduce framework and to speed up their indexing from Hadoop into Elasticsearch. Other projects we’ve used in the Hadoop ecosystem include Apache Zookeeper (used to coordinate lots of Solr servers into a distributed SolrCloud) and Apache Spark (for distributed processing).

We’re increasingly using Apache Kafka (a message broker) for handling large volumes of streaming data, for example log files. Kafka provides persistent storage of these streams, which might be ingested and pre-processed using Logstash and then indexed with Elasticsearch and visualised with Kibana to build high-performance monitoring systems. Throughput of thousands of items a second is not uncommon, and these open source systems can easily match the performance of proprietary monitoring engines such as Splunk at a far lower cost. Apache Samza, a stream processing framework, is based on Kafka, and we’ve used it to build a powerful full-text search system for streams. Note that Elasticsearch has a similar ‘stored search’ feature called Percolator, but this is quite a lot slower (as others have confirmed).

Most of the above systems are written in Java, and those that aren’t still run on the Java Virtual Machine (JVM), so our experience building large, performant and resilient systems on this platform has been invaluable. We’ll be writing in more detail about these projects soon. I’ve always said that search experts have been dealing with Big Data since well before it gained popularity as a concept – so if you’re serious about Big Data, ask us how we could help!

Better search for life sciences at the BioSolr Workshop, day 2 – Elasticsearch & others
Mon, 15 Feb 2016 – http://www.flax.co.uk/blog/2016/02/15/better-search-life-sciences-biosolr-workshop-day-2-elasticsearch-others/

Over the last 18 months we’ve been working closely with the European Bioinformatics Institute on a project to improve their use of open source search engines, funded by the BBSRC. The project was originally named BioSolr but has since grown to encompass Elasticsearch. Last week we held a two-day workshop on the Wellcome Genome Campus near Cambridge to showcase our achievements and hear from others working in the same field, focused on Solr on the first day and Elasticsearch and other solutions on the second. Attendees included both bioinformaticians and search experts, as the project has very much been about collaboration and learning from each other. Read about the first day here.

The second day started with Eric Pugh’s second talk on The (Unofficial) State of Elasticsearch, bringing us all up to date on the meteoric rise of this technology and the opportunities it opens up, especially in analytics and visualisation. Eric foresees Elasticsearch continuing to specialise in this area, with Solr sticking closer to its roots in information retrieval. Giovanni Tumarello followed with a fast-paced demonstration of Kibi, a platform built on Elasticsearch and Kibana. Kibi allows one to very quickly join, visualise and explore different data sets and I was impressed with the range of potential applications including in the life sciences.

Evan Bolton of the US-based NCBI was next, talking about the massive PubChem dataset (80 million unique chemical structures, 200 million chemical substance descriptions, and 230 million biological activities, all heavily crosslinked). Although both Solr and CLucene had been considered, they eventually settled on the Sphinx engine with its great support for SQL queries and JOINs, although Evan admitted this was not a cloud-friendly solution. His team are now considering knowledge graphs and how to present up to 100 billion RDF triples. Andrea Pierleoni of the Centre for Therapeutic Target Validation then talked about an Elasticsearch cluster he has developed to index ‘evidence strings’ (which relate targets to diseases using evidence). This is a relatively small collection of 2.1 million association objects, pre-processed using Python and stored in Redis before indexing.

Next up was Nikos Marinos from the EBI Literature Services team talking about their recent migration from Lucene to Solr. As he explained most of this was a straightforward task, with one wrinkle being the use of DIH Transformers where array data was used. Rafael Jimenez then talked about projects he has worked on using both Elasticsearch and Solr, and stressed the importance of adhering to open standards and re-use of software where possible – key strengths of open source of course. Michal Nowotka then talked about a proposed system to replace the current ChEMBL search using Solr and django-haystack (the latter allows one to use a variety of underlying search engines from Django). Finally, Nicola Buso talked about EBISearch, based on Lucene.

We then concluded with another hands-on session, more aimed at Elasticsearch this time. As you can probably tell we had been shown a huge variety of different search needs and solutions using a range of technologies over the two days and it was clear to me that the BioSolr project is only a small first step towards improving the software available – we have applied for further funding and we hope to have good news soon! Working with life science data, often at significant scale, has been fascinating.

Most of the presentations are now available for download. Thanks to all the presenters (especially those who travelled from abroad), the EBI for kindly hosting the event and in particular to Dr Sameer Velankar who has been the driving force behind this project.

Better search for life sciences at the BioSolr Workshop, day 1 – Apache Lucene/Solr
Wed, 10 Feb 2016 – http://www.flax.co.uk/blog/2016/02/10/better-search-life-sciences-biosolr-workshop-day-1-apache-lucenesolr/

Over the last 18 months we’ve been working closely with the European Bioinformatics Institute on a project to improve their use of open source search engines, funded by the BBSRC. The project was originally named BioSolr but has since grown to encompass Elasticsearch. Last week we held a two-day workshop on the Wellcome Genome Campus near Cambridge to showcase our achievements and hear from others working in the same field, focused on Solr on the first day and Elasticsearch and other solutions on the second. Attendees included both bioinformaticians and search experts, as the project has very much been about collaboration and learning from each other.

The day started with a quick recap of the project from myself and Dr Sameer Velankar of the EBI. Eric Pugh, founder of Flax’s US partners Open Source Connections, followed with his Unofficial State of Solr, detailing the history of the project, recent innovations and what might happen in the future, including some very interesting new features allowing for parallel SQL queries. We then heard from Flax team members Tom Winch and Matt Pearce on how they have built faceting improvements and a new XJoin between Solr and external systems, researched federated search and developed ontology indexers (note that all of the software they’ve built is available as open source, and Tom has recently written extensively about XJoin).

After lunch we heard from Peter Meric of the NCBI (the US equivalent of the EBI) on a Solr-based system for searching gene data, to supplement the NCBI’s homegrown Entrez system. This is very much a filtered search rather than a text search and indexes around 330m records. He also talked about a High Availability prototype of a replacement for the very high traffic PubMed service built on Amazon Web Services. Each Solr, MongoDB or Zookeeper node ‘announces’ itself using a monitor service and then replicates data from a master node. Although it is not yet available as open source I think this project may be of great interest to the wider Solr community and I hope we hear more of it soon.

Next up was a brief talk by Dan Bolser of the EBI on an ‘old school’ scheme for sharding plant phenotype data – I’d seen part of this presentation before and it’s linked to our own ideas on federating search across bioinformatics data. Dan was followed by Lewis Geer of NCBI talking about the SEQR protein similarity search engine built on Solr. Although somewhat complex for us non-biologists to understand, this very clever system relies on experimental results to suggest which of the possible variants of a protein system are likely, and adds these to the Solr index – it reminded me of a similar approach we’ve used to store possible OCR errors when working with scanned newsprint. His team’s code is available. Dan Stainer of the Ensembl project was next discussing how his team are indexing tens of thousands of genomes from thousands of species, currently on a MySQL backend with a REST API and a lot of Perl. He discussed how they have been experimenting with Elasticsearch to index around 3.2bn items, creating a 782GB index which builds in around 5-6 hours, to provide new capabilities such as structured queries for their genome browser tools.

We then held an interactive hands-on session, covering subjects such as ‘getting started with Solr’ and exploring some of the code we’ve built such as XJoin, followed by a conference dinner in Hinxton Hall. It was clear that there is a huge range of use cases for search technology in the life sciences community and almost as many different ways to address them, and the after-dinner conversation was lively and highly interesting!

Most of the presentations are now available for download and we’ve also written about the second day of the event, where we shifted focus onto Elasticsearch and other technologies.

XJoin for Solr, part 2: a click-through example
Fri, 29 Jan 2016 – http://www.flax.co.uk/blog/2016/01/29/xjoin-solr-part-2-click-example/

In my last blog post, I demonstrated how to set up and configure Solr to use the new XJoin search components we’ve developed for the BioSolr project, using an example from an e-commerce setting. This time, I’ll show how to use XJoin to make use of user click-through data to influence the score of products in searches.

I’ll step through things a bit more quickly this time around, and I’ll be using code from the last post, so reading that first is highly recommended. I’ll assume that the prerequisites from last time have been installed and set up in the same directories.

The design

Suppose we have a web page for searching a collection of products, and when a user clicks on a product listing in the result set (or perhaps, when they subsequently go on to buy that product – or both) we insert a record in an SQL database, storing the product id, the query terms they used, and an arbitrary weight value (which will depend on whether they merely clicked on a result, or if they went on to buy it, or some other behaviour such as mouse pointer tracking). We then want to use the click-through data stored in that database to boost products in searches that use those query terms again.

We could use the sum of the weights of all occurrences of a product id/query term combination as the product score boost, but then we might start to worry about a feedback process occurring. Alternatively, we might take the maximum or average weight across the occurrences. In the code below, we’ll use the maximum.
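As a quick sketch of that choice (using Python's built-in sqlite3 and a toy version of the click table described below), MAX picks up the strongest single signal per product without the runaway feedback that summing repeated clicks could produce:

import sqlite3

# Toy click data: two clicks for p1 on the same query, one for p2.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE click (id TEXT, q TEXT, weight FLOAT)")
db.executemany("INSERT INTO click VALUES (?, ?, ?)",
               [("p1", "excel", 1.0), ("p1", "excel", 3.0), ("p2", "excel", 1.0)])

# MAX(weight) per product id -- the aggregation used in the code below.
for row in db.execute("SELECT id, MAX(weight) FROM click GROUP BY id ORDER BY id"):
  print(row)   # ('p1', 3.0) then ('p2', 1.0)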

The advantage of this design over storing the click-through information in Solr is that you don’t have to update the Solr index every time there is user activity, which could become costly. An SQL database is much more suited to this task.

The external click-through API

Again, we’ll be using Python 3 (using the flask and sqlite3 modules) to implement the external API. I’ll be using this API to update the click-through database (by hand, for this example) as well as having Solr query it using XJoin. Here’s the code (partly based on code taken from here for caching the database connection in the Flask application context, and see here if you’re interested in more details about sqlite3’s support for full text search). Again, all the code written for this example is also available in the BioSolr GitHub repository:

from flask import Flask, request, g
import json
import sqlite3 as sql

# flask application context attribute for caching database connection
DB_APP_KEY = '_database'

# default weight for storing against queries
DEFAULT_WEIGHT = 1.0

app = Flask(__name__)

def get_db():
  """ Obtain a (cached) DB connection and return a cursor for it.
  """
  db = getattr(g, DB_APP_KEY, None)
  if db is None:
    db = sql.connect('click.db')
    setattr(g, DB_APP_KEY, db)
    c = db.cursor()
    c.execute("CREATE VIRTUAL TABLE IF NOT EXISTS click USING fts4 ("
                "id VARCHAR(256),"
                "q VARCHAR(256),"
                "weight FLOAT"
              ")")
    c.close()
  return db

@app.teardown_appcontext
def teardown_db(exception):
  db = getattr(g, DB_APP_KEY, None)
  if db is not None:
    db.close()

@app.route('/')
def main():
  return 'click-through API'

@app.route('/click/<path:id>', methods=["PUT"])
def click(id):
  # validate request
  if 'q' not in request.args:
    return 'Missing q parameter', 400
  q = request.args['q']
  try:
    w = float(request.args.get('weight', DEFAULT_WEIGHT))
  except ValueError:
    return 'Could not parse weight', 400

  # do the DB update
  db = get_db()
  try:
    c = db.cursor()
    c.execute("INSERT INTO click (id, q, weight) VALUES (?, ?, ?)", (id, q, w))
    db.commit()
    return 'OK'
  finally:
    c.close()

@app.route('/ids')
def ids():
  # validate request
  if 'q' not in request.args:
    return 'Missing q parameter', 400
  q = request.args['q']
  
  # do the DB lookup
  try:
    c = get_db().cursor()
    c.execute("SELECT id, MAX(weight) FROM click WHERE q MATCH ? GROUP BY id", (q, ))
    return json.dumps([{ 'id': id, 'weight': w } for id, w in c])
  finally:
    c.close()

if __name__ == "__main__":
  app.run(port=8001, debug=True)

This web API exposes two end-points. First we have PUT /click/[id] which is used when we want to update the SQL database after a user click. For the purposes of this demonstration, we’ll be hitting this end-point by hand using curl to avoid having to write a web UI. The other end-point, GET /ids?[query terms], is used by our XJoin component and returns a JSON-formatted array of id/weight objects where the query terms from the database match those given in the query string.
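For completeness, here is the same pair of interactions driven from Python with requests rather than curl (the product id and weight are purely illustrative; the equivalent curl commands appear later in the post):

import requests

base = "http://localhost:8001"

# Record a click (or purchase) against a product for the query 'excel'.
product_id = "http://www.google.com/base/feeds/snippets/9200068133591804002"
requests.put(base + "/click/" + product_id, params={"q": "excel", "weight": 3})

# Ask the API which products have click data matching the query terms.
print(requests.get(base + "/ids", params={"q": "excel"}).json())
# e.g. [{"id": "...9200068133591804002", "weight": 3.0}]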

Java glue code

Now we just need the Java glue code that sits between the XJoin component and our external API. Here’s an implementation of XJoinResultsFactory that does what we need:

package uk.co.flax.examples.xjoin;

import java.io.IOException;
import java.net.URLEncoder;
import java.util.HashMap;
import java.util.Map;

import javax.json.JsonArray;
import javax.json.JsonObject;
import javax.json.JsonValue;

import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.search.xjoin.XJoinResults;
import org.apache.solr.search.xjoin.XJoinResultsFactory;

public class ClickXJoinResultsFactory
implements XJoinResultsFactory {
  private String url;
  
  @Override
  @SuppressWarnings("rawtypes")
  public void init(NamedList args) {
    url = (String)args.get("url");
  }

  /**
   * Use 'click' REST API to fetch current click data. 
   */
  @Override
  public XJoinResults getResults(SolrParams params)
  throws IOException {
    String q = URLEncoder.encode(params.get("q"), "UTF-8");
    String apiUrl = url + "?q=" + q;
    try (HttpConnection http = new HttpConnection(apiUrl)) {
      JsonArray products = (JsonArray)http.getJson();
      return new ClickResults(products);
    }
  }
    
  public class ClickResults implements XJoinResults {
    private Map<String, Click> clickMap;
    
    public ClickResults(JsonArray products) {
      clickMap = new HashMap<>();
      for (JsonValue product : products) {
        JsonObject object = (JsonObject)product;
        String id = object.getString("id");
        double weight = object.getJsonNumber("weight").doubleValue();
        clickMap.put(id, new Click(id, weight));
      }
    }
    
    public int getCount() {
      return clickMap.size();
    }
    
    @Override
    public Iterable getJoinIds() {
      return clickMap.keySet();
    }

    @Override
    public Object getResult(String id) {
      return clickMap.get(id);
    }      
  }
  
  public class Click {
    
    private String id;
    private double weight;
    
    public Click(String id, double weight) {
      this.id = id;
      this.weight = weight;
    }
    
    public String getId() {
      return id;
    }
    
    public double getWeight() {
      return weight;
    } 
  }
}

Unlike the previous example, this time getResults() does depend on the SolrParams argument, so that the user’s query, q, is passed to the external API. Store this Java source in blog/java/uk/co/flax/examples/xjoin/ClickXJoinResultsFactory.java and compile into a JAR (again, we also need the HttpConnection class from the last blog post as well as javax.json-1.0.4.jar):

blog$ javac -sourcepath src/java -d bin -cp javax.json-1.0.4.jar:../lucene_solr_5_3/solr/dist/solr-solrj-5.3.2-SNAPSHOT.jar:../lucene_solr_5_3/solr/dist/solr-xjoin-5.3.2-SNAPSHOT.jar src/java/uk/co/flax/examples/xjoin/ClickXJoinResultsFactory.java
blog$ jar cvf click.jar -C bin .

Solr configuration

Starting with a fresh version of solrconfig.xml, insert these lines near the start to import the XJoin and user JARs (substitute /XXX with the full path to the parent of the blog directory):

<lib dir="${solr.install.dir:../../../..}/contrib/xjoin/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-xjoin-\d.*\.jar" />
<lib path="/XXX/blog/javax.json-1.0.4.jar" />
<lib path="/XXX/blog/click.jar" />

And our request handler configuration:

<queryParser name="xjoin" class="org.apache.solr.search.xjoin.XJoinQParserPlugin" />

<valueSourceParser name="weight" class="org.apache.solr.search.xjoin.XJoinValueSourceParser">
  <str name="attribute">weight</str>
  <double name="defaultValue">0.0</double>
</valueSourceParser>

<searchComponent name="x_click" class="org.apache.solr.search.xjoin.XJoinSearchComponent">
  <str name="factoryClass">uk.co.flax.examples.xjoin.ClickXJoinResultsFactory</str>
  <str name="joinField">id</str>
  <lst name="external">
    <str name="url">http://localhost:8001/ids</str>
  </lst>
</searchComponent>

<requestHandler name="/xjoin" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="wt">json</str>
    <str name="echoParams">none</str>
    <str name="defType">edismax</str>
    <str name="df">description</str>
    <str name="fl">*</str>

    <bool name="x_click">false</bool>
    <str name="x_click.results">count</str>
    <str name="x_click.fl">*</str>
  </lst>
  <arr name="first-components">
    <str>x_click</str>
  </arr>
  <arr name="last-components">
    <str>x_click</str>
  </arr>
</requestHandler>

Reload the Solr core (products) to get the new config in place.

Putting the pieces together

The following query will verify our Solr setup (remembering to escape curly brackets):

blog$ curl 'http://localhost:8983/solr/products/xjoin?qq=excel&q=$\{qq\}&fl=id,name,score&rows=4' | jq .

I’ve used Solr parameter substitution with the q/qq parameters which will simplify later queries (this has been in Solr since 5.1). This query returns:

{
  "responseHeader": {
    "status": 0,
    "QTime": 25
  },
  "response": {
    "numFound": 21,
    "start": 0,
    "maxScore": 2.9939778,
    "docs": [
      {
        "name": "individual software professor teaches excel and word",
        "id": "http://www.google.com/base/feeds/snippets/13017887935047670097",
        "score": 2.9939778
      },
      {
        "name": "individual software prm-xw3 professor teaches excel & word",
        "id": "http://www.google.com/base/feeds/snippets/7197668762339216420",
        "score": 2.9939778
      },
      {
        "name": "individual software prm-xw3 professor teaches excel & word",
        "id": "http://www.google.com/base/feeds/snippets/16702106469790828707",
        "score": 1.8712361
      },
      {
        "name": "individual software prm-xw3 professor teaches excel & word",
        "id": "http://www.google.com/base/feeds/snippets/9200068133591804002",
        "score": 1.8712361
      }
    ]
  }
}

There are some repeated products in the data, but so far, so good. Next, get the click-through API running:

blog$ python3 click.py

And check it’s working (this should return [] whatever query is chosen because the click-through database is empty):

curl localhost:8001/ids?q=software | jq .

Now, let’s populate the click-through database by simulating user activity. Suppose, given the above product results, the user goes on to click through to the fourth product (or even buy it). Then, the UI would update the click web API to indicate this has happened. Let’s do this by hand, specifying the product id, the user’s query, and a weight score (here, I’ll use the value 3, supposing the user bought the product in the end):

curl -XPUT 'localhost:8001/click/http://www.google.com/base/feeds/snippets/9200068133591804002?q=excel&weight=3'

Now, we can check the output that XJoin will see when using the click-through API:

blog$ curl localhost:8001/ids?q=excel | jq .

giving:

[
  {
    "weight": 3,
    "id": "http://www.google.com/base/feeds/snippets/9200068133591804002"
  }
]

Using the bf edismax parameter and the weight function set up in solrconfig.xml to extract the weight value from the external results stored in the x_click XJoin search component, we can boost product scores when they appear in the click-through database for the user’s query:

blog$ curl 'http://localhost:8983/solr/products/xjoin?qq=excel&q=$\{qq\}&x_click=true&x_click.external.q=$\{qq\}&bf=weight(x_click)^4&fl=id,name,score&rows=4' | jq .

which gives:

{
  "responseHeader": {
    "status": 0,
    "QTime": 13
  },
  "response": {
    "numFound": 21,
    "start": 0,
    "maxScore": 3.2224145,
    "docs": [
      {
        "name": "individual software prm-xw3 professor teaches excel & word",
        "id": "http://www.google.com/base/feeds/snippets/9200068133591804002",
        "score": 3.2224145
      },
      {
        "name": "individual software professor teaches excel and word",
        "id": "http://www.google.com/base/feeds/snippets/13017887935047670097",
        "score": 2.4895983
      },
      {
        "name": "individual software prm-xw3 professor teaches excel & word",
        "id": "http://www.google.com/base/feeds/snippets/7197668762339216420",
        "score": 2.4895983
      },
      {
        "name": "individual software prm-xw3 professor teaches excel & word",
        "id": "http://www.google.com/base/feeds/snippets/16702106469790828707",
        "score": 1.5559989
      }
    ]
  },
  "x_click": {
    "count": 1,
    "external": [
      {
        "joinId": "http://www.google.com/base/feeds/snippets/9200068133591804002",
        "doc": {
          "id": "http://www.google.com/base/feeds/snippets/9200068133591804002",
          "weight": 3
        }
      }
    ]
  }
}

Lo and behold, the product the user clicked on now appears at the top of the Solr results for that query. Have a play with the API, generate some more user activity and see how this affects subsequent queries. It copes fine with multiple-word queries; for example, suppose a user searches for ‘games software’:

curl 'http://localhost:8983/solr/products/xjoin?qq=games+software&q=$\{qq\}&x_click=true&x_click.external.q=$\{qq\}&bf=weight(x_click)^4&fl=id,name,score&rows=4' | jq .

There being no relevant queries in the click-through database, this has the same results as for a query without the XJoin, and as we can see, the value of response.x_click.count is 0:

{
  "responseHeader": {
    "status": 0,
    "QTime": 15
  },
  "response": {
    "numFound": 1158,
    "start": 0,
    "maxScore": 0.91356516,
    "docs": [
      {
        "name": "encore software 10568 - encore hoyle puzzle & board games 2005 - complete product - puzzle game - 1 user - complete product - standard - pc",
        "id": "http://www.google.com/base/feeds/snippets/4998847858583359731",
        "score": 0.91356516
      },
      {
        "name": "encore software 11141 - fate sb cs by wild games",
        "id": "http://www.google.com/base/feeds/snippets/826668451451666270",
        "score": 0.8699497
      },
      {
        "name": "encore software 10027 - hoyle board games (win 98 me 2000 xp)",
        "id": "http://www.google.com/base/feeds/snippets/8664755713112971171",
        "score": 0.85982025
      },
      {
        "name": "encore software 11253 - brain food games: cranium collection 2006 sb cs by encore",
        "id": "http://www.google.com/base/feeds/snippets/15401280256033043239",
        "score": 0.78744644
      }
    ]
  },
  "x_click": {
    "count": 0,
    "external": []
  }
}

Now let’s simulate the same user clicking on the second product (with default weight):

blog$ curl -XPUT 'localhost:8001/click/http://www.google.com/base/feeds/snippets/826668451451666270?q=games+software'

Next, suppose another user then searches for just ‘games’:

blog$ curl 'http://localhost:8983/solr/products/xjoin?qq=games&q=$\{qq\}&x_click=true&x_click.external.q=$\{qq\}&bf=weight(x_click)^4&fl=id,name,score&rows=4' | jq .

In the results, we see the ‘wild games’ product boosted to the top:

{
  "responseHeader": {
    "status": 0,
    "QTime": 60
  },
  "response": {
    "numFound": 212,
    "start": 0,
    "maxScore": 1.3652229,
    "docs": [
      {
        "name": "encore software 11141 - fate sb cs by wild games",
        "id": "http://www.google.com/base/feeds/snippets/826668451451666270",
        "score": 1.3652229
      },
      {
        "name": "xbox 360: ddr universe",
        "id": "http://www.google.com/base/feeds/snippets/16659259513615352372",
        "score": 0.95894843
      },
      {
        "name": "south park chef's luv shack",
        "id": "http://www.google.com/base/feeds/snippets/11648097795915093399",
        "score": 0.95894843
      },
      {
        "name": "egames. inc casual games pack",
        "id": "http://www.google.com/base/feeds/snippets/16700933768709687512",
        "score": 0.89483213
      }
    ]
  },
  "x_click": {
    "count": 1,
    "external": [
      {
        "joinId": "http://www.google.com/base/feeds/snippets/826668451451666270",
        "doc": {
          "id": "http://www.google.com/base/feeds/snippets/826668451451666270",
          "weight": 1
        }
      }
    ]
  }
}

Extensions

Of course, this approach can be extended to add in more sophisticated weighting and boosting strategies, or include more data about the user activity than just a simple weight score, which could be used to augment the display of the product in the UI (for example, “ten customers in the UK bought this product in the last month”).
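For example, one possible refinement – my own suggestion here, not something implemented in this post – is to let click weights decay with age so that stale activity stops boosting results:

import math
import time

HALF_LIFE = 7 * 24 * 3600   # one week, in seconds

def decayed_weight(weight, clicked_at, now=None):
  """Halve the stored weight for every week that has passed since the click."""
  age = (now or time.time()) - clicked_at
  return weight * math.exp(-math.log(2) * age / HALF_LIFE)

# A purchase-strength click from two weeks ago contributes only a quarter.
print(decayed_weight(3.0, time.time() - 14 * 24 * 3600))   # ~0.75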

The XJoin patch was developed as part of the BioSolr project but it is not specific to bioinformatics and can be used in any situation where you want to use data from an external source to influence the results of a Solr search. (Other joins, including cross-core joins, are available – but you need XJoin if the data you are joining against is not in Solr.). We’ll be talking about XJoin and the other features we’ve developed for both Solr and Elasticsearch, including powerful ontology indexing, at a workshop at the European Bioinformatics Institute next week.

The fun and frustration of writing a plugin for Elasticsearch for ontology indexing
Wed, 27 Jan 2016 – http://www.flax.co.uk/blog/2016/01/27/fun-frustration-writing-plugin-elasticsearch-ontology-indexing/

As part of our work on the BioSolr project, I have been continuing to work on the various Elasticsearch ontology annotation plugins (note that even though the project started with a focus on Solr – thus the name – we have also been developing some features for Elasticsearch). These are now largely working, with some quirks which will be mentioned below (they may not even be quirks, but they seem non-intuitive to me, so deserve a mention). It’s been a slightly painful process, as you may infer from the use of italics below, and we hope this post will illustrate some of the differences between writing plugins for Solr and Elasticsearch.

It’s probably worth noting that at least some of this write-up is speculative. I’m not privy to the internals of Elasticsearch, and have been building the plugin through a combination of looking at the Elasticsearch source code (as advised by the documentation) and running the same integration test over and over again for each of the various versions, and checking what was returned in the search response. There is very little in the way of documentation, and the 1.x versions of Elasticsearch have almost no comments or Javadoc in the code. It has been interesting and fun, and not at all exasperating or frustrating.

The code

The plugin code can be broken down into three broad sections:

  • A core module, containing code shared between the Elasticsearch and Solr versions of the plugin. Anything in this module should be search engine agnostic, and is dedicated to accessing and pulling data from ontologies, either via the OLS service (provided by the European Bioinformatics Institute, our partners in the BioSolr project) or more generally OWL files, and returning a structure which can be used by the plugins.
  • The es-ontology-annotator-core module, which is shared between all versions of the plugin, and contains Elasticsearch-specific code to build the helper classes required to access the ontology data.
  • The es-ontology-annotator-esx.x modules, which are specific to the various versions of Elasticsearch. So far, there are six of these (one of the more challenging aspects of this work has been that the Elasticsearch mapper structure has been evolving through the versions, as has some of the internal infrastructure supporting them):
    • 1.3 – for ES 1.3
    • 1.4 – for ES 1.4
    • 1.5 – for ES 1.5 – 1.7
    • 2.0 – for ES 2.0
    • 2.1 – for ES 2.1.1
    • 2.2 – for ES 2.2

I haven’t tried the plugin with any versions of ES earlier than 1.3. There was a change to the internal mapping classes between 1.4 and 1.5 (UpdateInPlaceHashMap was removed and replaced with CopyOnWriteHashMap), presumably for a Very Good Reason. Versions since 1.5 seem to be forward compatible with later 1.x versions.

The quirks

All of the versions of the plugin work in the same way. You specify in your mapping that a particular field has the type “ontology”. There are various additional properties that can be set, depending on whether you’re using an OWL file or OLS as your ontology data source (specified in the README). When the data is indexed, any information in that field is assumed to be an IRI referring to an ontology record, and will be used to fetch as much data as required/possible for that ontology record. The data will then be added as sub-fields to the ontology fields.
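As a minimal sketch of that declaration (the index and field names here are mine, and the source-specific settings for OWL files or OLS are omitted – see the README for those), creating the mapping from Python might look something like this:

import json
import requests

# Hypothetical index 'docs' with a document type 'doc' (Elasticsearch 1.x/2.x
# mapping layout); only the "ontology" field type comes from the description above.
mapping = {
  "mappings": {
    "doc": {
      "properties": {
        "annotation": {"type": "ontology"}
      }
    }
  }
}
requests.put("http://localhost:9200/docs", data=json.dumps(mapping))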

The new data is not added to the _source field, which is the easy way of seeing what data is in a stored record. In order to retrieve the new data, you have two options:

  • Grab the mapping for your index, and look through it for the sub-fields of your annotation field. Use as many of these as you need to populate the fields property in your search request, making sure you name them fully (ie. annotation.uri, annotation.label, annotation.child_uris).
  • Add all of the fields to the fields property in your search request (ie. "fields": [ "*" ]).

What you cannot do is add “annotation.*” to your search request to get all of the annotation subfields. At this stage, this doesn’t work. I’m still working out whether this is possible or not.
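Put another way, a search request currently has to spell the sub-fields out (or fall back to the catch-all). A sketch with illustrative index and field names:

import json
import requests

query = {
  "query": {"match_all": {}},
  # Option 1: name the generated sub-fields explicitly...
  "fields": ["annotation.uri", "annotation.label", "annotation.child_uris"],
  # ...or option 2: request every stored field instead:
  # "fields": ["*"],
}
r = requests.post("http://localhost:9200/docs/_search", data=json.dumps(query))
print(r.json()["hits"]["hits"][0].get("fields"))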

How it works

All of the versions work in a broadly similar fashion: the OntologyMapper class extends AbstractFieldMapper (Elasticsearch 1.x) or FieldMapper (Elasticsearch 2.x). The Mapper classes all have two internal classes:

  • a TypeParser, which reads the mapper’s configuration from the mapping details (as initially specified by the user, and as also returned from the Mapper.toXContent method), and returns…
  • a Builder, which constructs the mappers for the known sub-fields and ultimately builds the Mapper class. The sub-field mappers are all for string fields, with mappers for URI fields having tokenisation disabled, while the other fields have it enabled. All are both indexed and stored.

The Mapper parses the content of the initial field (the IRI for the ontology record), and adds the sub-fields to the record, as part of the Mapper.parse method call (this is the most significant part of the Mapper code). There are at least two ways of doing this, and the Elasticsearch source code has both depending on which Mapper class you look at. There is no indication in the source why you would use one method over the other. This helps with clarity, especially when things aren’t working as they should.

What makes life more interesting for the OntologyMapper class is that not all of the sub-fields are known at start time. If the user wishes to index additional relationships between nodes (“participates in”, “has disease location”, etc.), these are generated on the fly, and the sub-fields need to be added to the mapping. Figuring out how to do this, and also how to make sure those fields are returned when the use requests the mapping for the index, has been a particular challenge.

The TypeParser is called more than once during the indexing process. My initial assumption was that once the mapping details had been read from the user’s specification, the parser was “fixed,” and so you had to keep track of the sub-field mappers yourself. This is not the case. As noted above, the TypeParser can also be fed from the Mapper’s toXContent method (which generates the mapping seen when you call the _mapping endpoint). Elasticsearch versions 1.x didn’t seem to care particularly what toXContent returned, so long as it could be parsed without throwing a NullPointerException, but Elasticsearch versions 2.x actually check that all of the mapping configuration has been dealt with. This actually makes life easier internally – after the mapper has processed a record, at least some of the dynamic field mappings are known, so you can build the sub-field mappers in the Builder rather than having to build them on the fly during the Mapper.parse process.

The other non-trivial Mapper methods are:

  • toXContent, as mentioned several times already. This generates the mapping output (ie. the definition of the field as seen when you look via the _mapping endpoint).
  • merge, which seems to do a compatibility check between an incoming instance of the mapper and the current instance. I’ve added some checks to this, but no significant code. Several of the implementations of this method in the Elasticsearch source code simply contain comments to the effect of “will return to this later”, so it seems I’m not the only person who doesn’t understand how merge works, or why it is called.
  • traverse (Elasticsearch 1.x) and iterator (Elasticsearch 2.x), which seem to do similar things – namely providing a means to iterate through the sub-field mappers. In Elasticsearch 1.x, the traverse method is explicitly called as part of the process to add the new (dynamic) mappers to the mapping, but this isn’t a requirement for Elasticsearch 2.x. Elasticsearch 1.x distinguished between ObjectMappers and FieldMappers, which doesn’t seem to be a distinction in Elasticsearch 2.x.

Comparisons with the Solr plugin

The Solr plugin works somewhat differently to the Elasticsearch one. The Solr plugin is implemented as an UpdateRequestProcessor, and adds new fields directly to the incoming record (it doesn’t add sub-fields). This makes the returned data less tidy, but also easier to handle, since all of the new fields have the same prefix and can therefore be handled directly. You don’t need to explicitly tell Solr to return the new fields – because they are all stored, they are all returned by default.

On the other hand, you still have to jump through some hoops to work out which fields are dynamically generated, if you need to do that (e.g. to add checkboxes to a form to search “has disease location” or other relationships) – you need to call Solr to retrieve the schema, and use that as the basis for working out which are the new fields. For Elasticsearch, you have to request the mapping for your index, and use that in a similar way.

Configuration in Solr requires modifying the solrconfig.xml, once the plugin JAR file is in place, but doesn’t require any changes to the schema. All of the Elasticsearch configuration happens in the mapping definition. This reflects the different ways of implementing the plugin for Solr. I don’t have a particular feeling for whether it would have been better to implement the Solr plugin as a new field type – I did investigate, and it seemed much harder to do this, but it might be worth re-visiting if there is time available.

The Solr plugin was much easier to write, simply because the documentation is better. The Solr wiki has a very useful base page for writing a new UpdateRequestProcessor, and the source code has plenty of comments and Javadoc (although it’s not perfect in this respect – SolrCoreAware has no documentation at all, has been present since Solr 1.3, and was a requirement for keeping track of the Ontology helper threads).

I will most likely update this post as I become aware of things I have done which are wrong, or any misinformation it contains. We’ll also be talking further about the BioSolr project at a workshop event on February 3rd/4th 2016. We welcome feedback and comments, of course – especially from the wider Elasticsearch developer community.

XJoin for Solr, part 1: filtering using price discount data
Mon, 25 Jan 2016 – http://www.flax.co.uk/blog/2016/01/25/xjoin-solr-part-1-filtering-using-price-discount-data/

In this blog post I want to introduce you to a new Apache Solr plugin component called XJoin. I’ll show how we can use this to solve a common problem in e-commerce – how to use price discount data, provided by an external web API, to either filter the results of a product search or boost scores. A further post will show another example, using click-through data to influence the score of subsequent searches.

What is XJoin?

The XJoin component can be used when you want values from some source external to Solr to filter or influence the score of hits in your Solr result set.  It is currently available as a Solr patch on the XJoin JIRA ticket SOLR-7341, so to use it, you’ll need to check out a version of Apache Lucene/Solr using Subversion, then patch and build it (see below for details).

The XJoin patch was developed as part of the BioSolr project but it is not specific to bioinformatics and can be used in any situation where you want to use data from an external source to influence the results of a Solr search. (Other joins, including cross-core joins, are available – but you need XJoin if the data you are joining against is not in Solr.). We’ll be talking about XJoin and the other features we’ve developed for both Solr and Elasticsearch, including powerful ontology indexing, at a workshop at the European Bioinformatics Institute next week.

Patching SOLR

I’m going to be using Solr version 5.3 for this blog. If you’re following along, check out a clean copy using Subversion:

$ svn co https://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_5_3

Download the XJoin patch (find the one corresponding to this version of Solr on the JIRA ticket) into the newly checked-out directory, and apply it:

lucene_solr_5_3$ svn patch SOLR-7341.patch-5_3

And then build Solr from the solr sub-directory:

lucene_solr_5_3/solr$ ant server

We should now be able to start the patched Solr server:

lucene_solr_5_3/solr$ bin/solr start

Indexing a sample product data set

I’ll be using a sample Google product feed, GoogleProducts.csv, which I got from here. Create a new directory called blog (mine has the same parent as my Solr check-out) and download the sample into it. It’s in CSV format, with columns for product id, name, description, manufacturer and price. Indexing this will be a piece of cake!

We’ll begin with a copy of the sample Solr config directory:

blog$ cp -r ../lucene_solr_5_3/solr/server/solr/configsets/basic_configs/conf .

Modify conf/schema.xml so that our Solr documents have fields corresponding to those in the CSV file:

<field name="id" type="string" indexed="true" /> 
<field name="name" type="text_en" indexed="true" />
<field name="description" type="text_en" indexed="true" />
<field name="manufacturer" type="string" indexed="true" />
<field name="price" type="float" indexed="true" />

Naturally, the product id will serve as the Solr unique key:

<uniqueKey>id</uniqueKey>

We can use the sample solrconfig.xml as is for now. Add a core called products using the Solr core admin UI (as you started a Solr server above, this should be available at   http://localhost:8983/solr/#/~cores). The values for instanceDir and dataDir will both be the full path of the blog directory.

I’ll be using Python to index the product data. The code is written for Python 3, and won’t work in Python 2.x because of character encoding issues in the csv module, but you can fix it by using a UTF8Recoder as described in the module documentation. Here’s my indexing script (note that all the code written for this example is also available in the BioSolr GitHub repository):

import sys
import csv
import json
import requests

def value(k, v):
    return k, v.strip() if k != 'price' else float(v.split()[0])

def read(path):
    with open(path, encoding='iso-8859-1') as f:
        reader = csv.DictReader(f)
        for doc in reader:
            yield dict(value(k, v) for k, v in doc.items()
                       if len(v.strip()) > 0)

def index(url, docs):
    print("Sending {0} documents to {1}".format(len(docs), url))
    data = json.dumps(docs)
    headers = { 'content-type': 'application/json' }
    r = requests.post(url, data=data, headers=headers)
    if r.status_code != 200:
      raise IOError("Bad SOLR update")

if __name__ == "__main__":
    if len(sys.argv) < 3:
        print("Usage: {0} <Solr update URL> <CSV file>".format(sys.argv[0]))
        sys.exit(1)

    docs = list(read(sys.argv[2]))
    index(sys.argv[1], docs)

The script tidies up the prices because they aren’t consistently formatted, converting them to float values. Save the script in index.py and use it to index the Google product data into Solr (let’s force commits, just to be sure):

blog$ python3 index.py http://localhost:8983/solr/products/update?commit=true GoogleProducts.csv

And, lo and behold, we can see our data in Solr using cURL (I like to pipe the output through jq to get nicely formatted JSON):

curl 'localhost:8983/solr/products/select?wt=json&q=*' | jq .

So, using Solr we’ve now built a full text product search in only a few minutes, with potentially all the add-ons Solr provides out of the box. However, suppose there is supplementary information about the products, available from an external source (which might not be under our control).

I will now demonstrate how to configure Solr so that during a product search, the external source is also queried (either with the same user query or something different) and the resulting external data used to influence the result set. Each external result is ‘joined’ against a Solr document via a ‘join field’ or ‘join id’, which doesn’t have to be the Solr unique id (in the examples below I use the product id and manufacturer as the join fields). To get an ‘inner join’ I will use the XJoinQParserPlugin to turn the external ids into a filter query, but it’s also possible to build boost queries or use the XJoinValueSourceParser to use external values in a boost function. You can see all this implemented below.
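Before diving into the configuration, here is a toy model in Python (nothing to do with the plugin's actual internals) of what the 'inner join' filter and the value-based boost amount to:

# External results keyed by join id (here the product id), plus some Solr hits
# with their text-relevance scores. All values are made up for illustration.
external = {"p2": {"discountPct": 40}, "p4": {"discountPct": 10}}
solr_hits = [{"id": "p1", "score": 1.2},
             {"id": "p2", "score": 1.0},
             {"id": "p4", "score": 0.7}]

# 'Inner join': keep only hits whose join id appears in the external results.
joined = [h for h in solr_hits if h["id"] in external]

# Boost: fold an external value into each hit's score via some function of it.
for h in joined:
  h["score"] += external[h["id"]]["discountPct"] / 100.0

print(sorted(joined, key=lambda h: -h["score"]))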

Product discount offers example

In the first of my examples, I’ll set up filtering and score boosting based on discount offers, the external source for which is going to be a web service, which I’m going to make available locally on http://localhost:8000. Again, I’ll implement this in Python, using the popular Flask web server micro-framework and the module requests. Install both of these using pip (I need sudo, but you might not):

blog$ sudo pip install flask requests

Creating the external source

Here’s my code for the product offers web API:

from flask import Flask
from index import read
import json
import random
import sys

app = Flask(__name__)

@app.route('/')
def main():
    return json.dumps({ 'info': 'product offers API' })

@app.route('/products')
def products():
    # A random selection of products, each with a random discount.
    offer = lambda doc: {
                'id': doc['id'],
                'discountPct': random.randint(1, 80)
            }
    return json.dumps([offer(doc) for doc
                       in random.sample(app.docs, 64)])

@app.route('/manufacturers')
def manufacturer():
    # Three random manufacturers, each with a random discount.
    manufacturers = set(doc['manufacturer'] for doc in app.docs
                        if 'manufacturer' in doc)
    deal = lambda m: {
               'manufacturer': m,
               'discountPct': random.randint(1, 10) * 5
           }
    # sorted() because random.sample() needs a sequence rather than a set
    # on recent versions of Python 3
    return json.dumps([deal(m) for m
                       in random.sample(sorted(manufacturers), 3)])

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: {0} <CSV file>".format(sys.argv[0]))
        sys.exit(1)

    app.docs = list(read(sys.argv[1]))
    app.run(port=8000, debug=True)

The code generates discounts for a random selection of products and manufacturers. Save it to blog/offer.py and start the server, supplying the Google products CSV file on the command line:

blog$ python3 offer.py GoogleProducts.csv

Now, test it out using cURL (again, I like to pipe through jq to get nicely formatted JSON):

$ curl -s localhost:8000/products | jq .

You should see a list of objects, each with a product id and a discount percentage, something like:

[
  {
    "discountPct": 41,
    "id": "http://www.google.com/base/feeds/snippets/18100341066456401733"
  },
  {
    "discountPct": 63,
    "id": "http://www.google.com/base/feeds/snippets/16969493842479402672"
  },
  {
    "discountPct": 13,
    "id": "http://www.google.com/base/feeds/snippets/10357785197400989441"
  },
  {
    "discountPct": 35,
    "id": "http://www.google.com/base/feeds/snippets/2813321165033737171"
  },
  {
    "discountPct": 27,
    "id": "http://www.google.com/base/feeds/snippets/15203735208016659510"
  },
  ...
]

You get similar output if you use the /manufacturers endpoint:

$ curl -s localhost:8000/manufacturers | jq .

This time, we get a shorter list, of manufacturers each with a discount percentage, for example:

[
  {
    "discountPct": 15,
    "manufacturer": "freeverse software"
  },
  {
    "discountPct": 5,
    "manufacturer": "pinnacle systems"
  },
  {
    "discountPct": 50,
    "manufacturer": "destineer inc"
  }
]

Creating XJoin glue code

To bridge the gap between Solr and our external data source, XJoin requires some glue code, written in Java, to query the source and return the results. First, I’ll create a quick utility class to help with HTTP connections:

package uk.co.flax.examples.xjoin;

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

import javax.json.Json;
import javax.json.JsonReader;
import javax.json.JsonStructure;

public class HttpConnection implements AutoCloseable {
  private HttpURLConnection http;
  
  public HttpConnection(String url) throws IOException {
    http = (HttpURLConnection)new URL(url).openConnection();
  }
  
  public JsonStructure getJson() throws IOException {
    http.setRequestMethod("GET");
    http.setRequestProperty("Accept", "application/json");
    try (InputStream in = http.getInputStream();
         JsonReader reader = Json.createReader(in)) {
      return reader.read();
    }
  }
  
  @Override
  public void close() {
    http.disconnect();
  }
}

Save this as blog/src/java/uk/co/flax/examples/xjoin/HttpConnection.java (matching the -sourcepath used when compiling below). The glue code we need is fairly simple, and can be written as a single class implementing the XJoinResultsFactory interface:

package uk.co.flax.examples.xjoin;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import javax.json.JsonArray;
import javax.json.JsonObject;
import javax.json.JsonValue;

import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.search.xjoin.XJoinResults;
import org.apache.solr.search.xjoin.XJoinResultsFactory;

public class OfferXJoinResultsFactory
implements XJoinResultsFactory {
  private String url;
  private String field;
  private String discountField;
  
  @Override
  @SuppressWarnings("rawtypes")
  public void init(NamedList args) {
    url = (String)args.get("url");
    field = (String)args.get("field");
    discountField = (String)args.get("discountField");
  }

  /**
   * Use 'offers' REST API to fetch current offer data. 
   */
  @Override
  public XJoinResults getResults(SolrParams params)
  throws IOException {
    try (HttpConnection http = new HttpConnection(url)) {
      JsonArray offers = (JsonArray)http.getJson();
      return new OfferResults(offers);
    }
  }
   
  /**
   * Results of the external search - methods like getXXX() are used
   * to expose the property XXX in the SOLR results.
   */
  public class OfferResults implements XJoinResults {
    private JsonArray offers;
    
    public OfferResults(JsonArray offers) {
      this.offers = offers;
    }
    
    public int getCount() {
      return offers.size();
    }
    
    @Override
    public Iterable getJoinIds() {
      List ids = new ArrayList<>();
      for (JsonValue offer : offers) {
        ids.add(((JsonObject)offer).getString(field));
      }
      return ids;
    }

    @Override
    public Object getResult(String joinIdStr) {
      for (JsonValue offer : offers) {
        String id = ((JsonObject)offer).getString(field);
        if (id.equals(joinIdStr)) {
          return new Offer(offer);
        }
      }
      return null;
    }
  }
  
  /**
   * A discount offer - methods like getXXX() are used to expose
   * properties that can be joined with each Solr result via the join
   * id field.
   */
  public class Offer {
    private JsonValue offer;
    
    public Offer(JsonValue offer) {
      this.offer = offer;
    }
    
    public double getDiscount() {
      return ((JsonObject)offer).getInt(discountField) * 0.01d;
    }
  }
}

Here, the init() method initialises the URL for the external API and the names of the values we want to pick out from the external data. The getResults() method connects to the external API – since in this example the discounts do not depend on the user’s query, we don’t use the SolrParams argument at all. It returns an implementation of XJoinResults, which must be able to return a collection of join ids (that is, the value of the join id field for each external result), and also be able to return an external result object given a join id. Together, the XJoinResults object and each external result object contain the results of the external search, exposed via getXXX() methods (which are mapped to properties called XXX) and (once everything is plumbed in) available to Solr for filtering, for affecting document scores, or for inclusion in the results set.

Save the above as blog/src/java/uk/co/flax/examples/xjoin/OfferXJoinResultsFactory.java. You’ll also need javax.json-1.0.4.jar, which you can download from here if you don’t already have it – place it in the blog directory. Compile the two Java source files, and create a JAR to contain the resulting .class files:

blog$ mkdir bin
blog$ javac -sourcepath src/java -d bin -cp javax.json-1.0.4.jar:../lucene_solr_5_3/solr/dist/solr-solrj-5.3.2-SNAPSHOT.jar:../lucene_solr_5_3/solr/dist/solr-xjoin-5.3.2-SNAPSHOT.jar src/java/uk/co/flax/examples/xjoin/OfferXJoinResultsFactory.java
blog$ jar cvf offer.jar -C bin .

Configuring XJoin

So now – at last! – I’ll configure a Solr query handler that uses the XJoin Solr plugin components to add filters and boost queries based on the external data.

I’ll be working with blog/conf/solrconfig.xml now. The first thing to do is include the contrib JARs for XJoin and our glue code JAR (offer.jar) in <lib> directives near the top of the config file. To do that, add in the following snippet just under the <dataDir> directive:

<lib dir="${solr.install.dir:../../../..}/contrib/xjoin/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-xjoin-\d.*\.jar" />
<lib path="/XXX/blog/javax.json-1.0.4.jar" />
<lib path="/XXX/blog/offer.jar" />

Here, you need to substitute /XXX with the full path to the parent of the blog directory.  (We need to include javax.json-1.0.4.jar because it’s a dependency of our offer.jar.) Now for the request handler config – I’ll include everything we’re going to need even though it won’t all be used straightaway:

<queryParser name="xjoin" class="org.apache.solr.search.xjoin.XJoinQParserPlugin" />

<valueSourceParser name="discount" class="org.apache.solr.search.xjoin.XJoinValueSourceParser">
  <str name="attribute">discount</str>
  <double name="defaultValue">0.0</double>
</valueSourceParser>

<searchComponent name="x_product_offers" class="org.apache.solr.search.xjoin.XJoinSearchComponent">
  <str name="factoryClass">uk.co.flax.examples.xjoin.OfferXJoinResultsFactory</str>
  <str name="joinField">id</str>
  <lst name="external">
    <str name="url">http://localhost:8000/products</str>
    <str name="field">id</str>
    <str name="discountField">discountPct</str>
  </lst>
</searchComponent>

<searchComponent name="x_manufacturer_offers" class="org.apache.solr.search.xjoin.XJoinSearchComponent">
  <str name="factoryClass">uk.co.flax.examples.xjoin.OfferXJoinResultsFactory</str>
  <str name="joinField">manufacturer</str>
  <lst name="external">
    <str name="url">http://localhost:8000/manufacturers</str>
    <str name="field">manufacturer</str>
    <str name="discountField">discountPct</str>
  </lst>
</searchComponent>

<requestHandler name="/xjoin" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="wt">json</str>
    <str name="echoParams">all</str>
    <str name="defType">edismax</str>
    <str name="df">description</str>
    <str name="fl">*</str>

    <bool name="x_product_offers">false</bool>
    <str name="x_product_offers.results">count</str>
    <str name="x_product_offers.fl">*</str>

    <bool name="x_manufacturer_offers">false</bool>
    <str name="x_manufacturer_offers.results">count</str>
    <str name="x_manufacturer_offers.fl">*</str>
  </lst>
  <arr name="first-components">
    <str>x_product_offers</str>
    <str>x_manufacturer_offers</str>
  </arr>
  <arr name="last-components">
    <str>x_product_offers</str>
    <str>x_manufacturer_offers</str>
  </arr>
</requestHandler>

Insert this request handler config somewhere near the bottom of solrconfig.xml.

Using XJoin in a query

Let’s quickly get a query working, then I’ll explain what all the components that I’ve included do. Try this (remembering to escape curly brackets on the command line):

blog$ curl 'localhost:8983/solr/products/xjoin?q=*&x_product_offers=true&fq=\{!xjoin\}x_product_offers&fl=id,name&rows=4' | jq .

You should see output like this (I’ve edited responseHeader.params for clarity):

{
  "responseHeader": {
    "status": 0,
    "QTime": 22,
    "params": {
      "x_product_offers": "true", 
      "x_product_offers.results": "count",
      "x_product_offers.fl": "*", 
      "q": "*", 
      "fq": "{!xjoin}x_product_offers", 
      "fl": "id,name",
      "rows": "4"
    }
  },
  "response": {
    "numFound": 64,
    "start": 0,
    "docs": [
      {
        "name": "did0480p-m311 plasmon additional maintenance 24x7 - plasmon diamond technical support - consul",
        "id": "http://www.google.com/base/feeds/snippets/13522752516373728128"
      },
      {
        "name": "apple ilife '06 family pack",
        "id": "http://www.google.com/base/feeds/snippets/10939909441298262260"
      },
      {
        "name": "adobe cs3 web standard upsell",
        "id": "http://www.google.com/base/feeds/snippets/8042583218932085904"
      },
      {
        "name": "the richard friedman trio motown hits - *(for the tg-100)*",
        "id": "http://www.google.com/base/feeds/snippets/17853905518738313346"
      }
    ]
  },
  "x_product_offers": {
    "count": 64,
    "external": [
      {
        "joinId": "http://www.google.com/base/feeds/snippets/13522752516373728128",
        "doc": {
          "discount": 0.11
        }
      },
      {
        "joinId": "http://www.google.com/base/feeds/snippets/10939909441298262260",
        "doc": {
          "discount": 0.76
        }
      },
      {
        "joinId": "http://www.google.com/base/feeds/snippets/8042583218932085904",
        "doc": {
          "discount": 0.78
        }
      },
      {
        "joinId": "http://www.google.com/base/feeds/snippets/17853905518738313346",
        "doc": {
          "discount": 0.05
        }
      }
    ]
  }
}

Here you can see the usual Solr output with our product documents in the response.docs array. Notice that the value of response.numFound is only 64 out of a possible 3226. Additionally, we have an extra top-level section, x_product_offers, which gives us results from the external offers API – count tells us the total number of external results found, and there is an external result object with a join id matching each hit in the Solr results.

The query we made to get these results is a combination of the parameters in the request handler and those in the URL’s query string – I’ve left the pertinent ones in responseHeader.params. The first parameter, x_product_offers=true, turns on the XJoin component that talks to the offers API, so that at query time it will make a connection and retrieve external results (note that in this case no parameters are passed to the external API – the next blog post will demonstrate that). The following two parameters control which fields are output from the external results. The .results option is a field list controlling the fields returned from the OfferResults object (our implementation of XJoinResults – see the code above – there is one OfferResults object per external request, and it acts as a collection of the returned external results). The .fl option is another field list controlling the fields returned for each external result object – these values can be used for filtering, boosting, and so on (more on which below).
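
If you prefer to drive these queries from Python rather than cURL (which also avoids having to escape the curly brackets in the shell), here is a minimal sketch using requests – it assumes the same local Solr URL and the /xjoin handler configured above:

# A minimal sketch of the same XJoin query issued from Python with
# requests, assuming Solr is running locally as configured above.
import json
import requests

params = {
    'q': '*',
    'x_product_offers': 'true',
    'fq': '{!xjoin}x_product_offers',  # no shell escaping needed here
    'fl': 'id,name',
    'rows': 4,
    'wt': 'json',
}
r = requests.get('http://localhost:8983/solr/products/xjoin', params=params)
r.raise_for_status()
result = r.json()

print(result['response']['numFound'])      # e.g. 64
for doc in result['response']['docs']:
    print(doc['id'], doc['name'])
print(json.dumps(result['x_product_offers'], indent=2))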

The parameters q=*, fl=id,name and rows=4 have their usual effects. The really interesting parameter is the filter query:

fq={!xjoin}x_product_offers

This uses Solr local parameters “short-form” syntax to reference the XJoinQParserPlugin that was set up in solrconfig.xml (it doesn’t take any initialisation parameters). This component uses the join ids from the referenced XJoin component to create a query that ORs together terms like join_field:join_id (one for each external result). It is based on the Solr built-in TermsQParserPlugin and supports the same method parameter (but this can usually be omitted). So, here, it makes a filter based on the join ids returned by the offers API – thus, only the products which have a current offer are returned.

Note that we could have used the same syntax in the q parameter to achieve the same effect, but it’s more usual for the user’s full text query to be specified in q and a ‘join’ created using a filter query.
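
To make the effect of {!xjoin} concrete, here is an illustrative sketch of the equivalent filter built by hand: fetch the join ids from the offers API and OR them together into a filter query on the join field (this is only an illustration, assuming id is a plain string field – the plugin builds this query for you, and more efficiently):

# Illustrative only: build by hand the kind of filter that the
# XJoinQParserPlugin derives from the external join ids.
import requests

offers = requests.get('http://localhost:8000/products').json()
join_ids = [offer['id'] for offer in offers]

# OR together join_field:join_id terms, quoting the ids since they are URLs
fq = 'id:(' + ' OR '.join('"{0}"'.format(i) for i in join_ids) + ')'

params = {'q': '*', 'fq': fq, 'fl': 'id,name', 'rows': 4, 'wt': 'json'}
r = requests.get('http://localhost:8983/solr/products/select', params=params)
# Should match one Solr document per offer (a fresh random set each time)
print(r.json()['response']['numFound'])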

Using the XJoinValueSourceParser

The XJoinValueSourceParser component that we have configured in solrconfig.xml provides us with a function, discount, that we can use in a function query. I configured the component to extract the value of discount from external results, and we supply an XJoin component name as the argument – this is a reference to a set of external results.

This opens up lots of possibilities, for example a search in which each product’s score is boosted by a reciprocal function of the discounted price (so products which are cheaper after discounting are boosted higher):

blog$ curl 'localhost:8983/solr/products/xjoin?q=*&x_product_offers=true&bf=recip(product(price,sub(1,discount(x_product_offers))),1,100,100)^2&fl=id,price,score&rows=4' | jq .

which results in a response something like (again, with responseHeader.params edited for clarity):

{
  "responseHeader": {
    "status": 0,
    "QTime": 55,
    "params": {
       "x_product_offers": "true", 
       "x_product_offers.results": "count", 
       "x_product_offers.fl": "*", 
       "q": "*", 
       "bf": "recip(product(price,sub(1,discount(x_product_offers))),1,100,100)^2",
       "fl": "id,price,score",
       "rows": "4"
     }
   },
  "response": {
    "numFound": 3226,
    "start": 0,
    "maxScore": 1.3371909,
    "docs": [
      {
        "id": "http://www.google.com/base/feeds/snippets/549551716004314019",
        "price": 0.5,
        "score": 1.3371909
      },
      {
        "id": "http://www.google.com/base/feeds/snippets/13704505045182265069",
        "price": 8.49,
        "score": 1.325241
      },
      {
        "id": "http://www.google.com/base/feeds/snippets/17894887781222328015",
        "price": 9.9,
        "score": 1.3166784
      },
      {
        "id": "http://www.google.com/base/feeds/snippets/18427513736767114578",
        "price": 2.99,
        "score": 1.3156738
      }
    ]
  },
  "x_product_offers": {
    "count": 64,
    "external": [
      {
        "joinId": "http://www.google.com/base/feeds/snippets/13704505045182265069",
        "doc": {
          "discount": 0.78
        }
      },
      {
        "joinId": "http://www.google.com/base/feeds/snippets/17894887781222328015",
        "doc": {
          "discount": 0.71
        }
      }
    ]
  }
}

This time, because we haven’t applied a filter based on the external join ids, we still have the full set of documents in the results set (3226 in total). Note that although there are 4 results in response.docs (as requested by rows=4), there are only 2 external results in x_product_offers.external – this is because only 2 of those 4 Solr documents have matching external results (in that they have the same value of join id in the join field, which in this case is the product id). In other words, only 2 out of the 4 products returned have discounts offered.

To achieve the price boost, instead of a filter query, we have a boost function:

bf=recip(product(price,sub(1,discount(x_product_offers))),1,100,100)^2

For each Solr document in the results set, the value of the expression discount(x_product_offers) is found by calling getDiscount() on the matching external result in the x_product_offers XJoin search component. When there is no matching external result, the default value 0.0 is used, as configured for the value source parser in solrconfig.xml, which is equivalent to a 0% discount.
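
To get a rough feel for what this boost does, here is a small sketch (the prices are made up; Solr’s recip(x, m, a, b) computes a / (m*x + b), and the ^2 in the bf simply weights the function’s contribution to the overall score):

# Illustrative sketch of the boost function
#   recip(product(price, sub(1, discount(x_product_offers))), 1, 100, 100)
# The discount defaults to 0.0 (as configured for the value source parser)
# when a product has no matching external offer.
def price_boost(price, discount=0.0):
    x = price * (1 - discount)      # discounted price
    return 100.0 / (1.0 * x + 100.0)  # recip(x, 1, 100, 100)

print(price_boost(8.49, 0.78))   # ~0.98 - cheap after a big discount
print(price_boost(8.49))         # ~0.92 - same price, no offer
print(price_boost(500.0, 0.10))  # ~0.18 - expensive, small discount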

Of course, instead of the match-all q=* query, we can do an actual product search with our price boost, for example, q=apple. To be more sophisticated, we can also use the edismax parameter qf to query across both the name and description fields and weight them as we desire, for example, qf=name^4 description^2 or similar.

Joining on a field other than the unique id field

The join field does not have to correspond to the Solr unique id field. As seen above, the offers web API also returns discounts based on manufacturer (via the /manufacturers endpoint). I configured another XJoin search component in solrconfig.xml called x_manufacturer_offers; the only differences from x_product_offers are the join field, which is now manufacturer, and the field taken from the external results as the join value, which is of course also manufacturer.

So now, for example, we can do a weighted query for software (weighting matches in name over description), restricted to products whose manufacturer offers a discount of at least 20%:

blog$ curl 'localhost:8983/solr/products/xjoin?q=software&qf=name^4+description^2&x_manufacturer_offers=true&fq=\{!frange+l=0.2\}discount(x_manufacturer_offers)&fl=*&rows=4' | jq .

See FunctionRangeQParserPlugin for details of the filter query used in this search. This gives something like (responseHeader.params omitted this time):

{
  "responseHeader": {
    "status": 0,
    "QTime": 4
  },
  "response": {
    "numFound": 25,
    "start": 0,
    "maxScore": 1.1224447,
    "docs": [
      {
        "price": 18.99,
        "name": "freeverse software 005 solace",
        "manufacturer": "freeverse software",
        "id": "http://www.google.com/base/feeds/snippets/7436299398173390476",
        "description": "in the noble tradition of axis & alliestm freeverse software unleashes an epic strategy board game that's so addicting it will leave you sleep deprived and socially inept! in the noble tradition of axis & alliestm freeverse software unleashes an ...",
        "_version_": 1524074329499762700
      },
      {
        "price": 18.99,
        "name": "freeverse software 005 solace",
        "manufacturer": "freeverse software",
        "id": "http://www.google.com/base/feeds/snippets/17001745805951209994",
        "description": "in the noble tradition of axis & alliestm freeverse software unleashes an epic strategy board game that's so addicting it will leave you sleep deprived and socially inept! in the noble tradition of axis & alliestm freeverse software unleashes an ...",
        "_version_": 1524074329499762700
      },
      {
        "price": 19.99,
        "name": "freeverse software 4001 northland",
        "manufacturer": "freeverse software",
        "id": "http://www.google.com/base/feeds/snippets/10584509515076384561",
        "description": "stand-alone real-time strategy game based on viking mythology description: stand-alone real-time strategy game based on viking mythology.game features:single player campaign with 8 missions including several sub missions. the exciting plots tells ...",
        "_version_": 1524074329559531500
      },
      {
        "price": 19.99,
        "name": "freeverse software 4001 northland",
        "manufacturer": "freeverse software",
        "id": "http://www.google.com/base/feeds/snippets/17283219592038470822",
        "description": "stand-alone real-time strategy game based on viking mythology description: stand-alone real-time strategy game based on viking mythology.game features:single player campaign with 8 missions including several sub missions. the exciting plots tells ...",
        "_version_": 1524074329681166300
      }
    ]
  },
  "x_manufacturer_offers": {
    "count": 3,
    "external": [
      {
        "joinId": "freeverse software",
        "doc": {
          "discount": 0.2
        }
      }
    ]
  }
}

In this case, there was only one manufacturer represented in the requested top 4 rows of the Solr results set.
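
If you want to check which manufacturers would pass that frange lower bound before querying Solr, here’s a quick sketch against the offers API (remember the discounts are regenerated randomly on every request, so the set will differ from any earlier query):

# Which manufacturer offers would pass the {!frange l=0.2} filter?
# discount() exposes discountPct / 100, so l=0.2 means a discount of
# at least 20%.
import requests

deals = requests.get('http://localhost:8000/manufacturers').json()
qualifying = [d['manufacturer'] for d in deals if d['discountPct'] >= 20]
print(qualifying)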

Using two XJoin components in the same query

It’s worth noting that you can use more than one XJoin component in the same query. You can come up with more complicated examples, but this one shows how to query for all products that have a manufacturer discount as well as a product discount:

blog$ curl 'localhost:8983/solr/products/xjoin?q=*&x_product_offers=true&x_manufacturer_offers=true&fq=\{!xjoin\}x_product_offers&fq=\{!xjoin\}x_manufacturer_offers&fl=id,name,manufacturer&rows=4&wt=json' | jq .

You might have to try again a few times before you get a non-empty result set – here’s one I got:

{
  "responseHeader": {
    "status": 0,
    "QTime": 7
  },
  "response": {
    "numFound": 2,
    "start": 0,
    "docs": [
      {
        "name": "apple software m8789z/a webobjects 5.2",
        "manufacturer": "apple software",
        "id": "http://www.google.com/base/feeds/snippets/4776201646741876078"
      },
      {
        "name": "apple software m9301z/b soundtrack v1.2",
        "manufacturer": "apple software",
        "id": "http://www.google.com/base/feeds/snippets/16537637847870148950"
      }
    ]
  },
  "x_product_offers": {
    "count": 64,
    "external": [
      {
        "joinId": "http://www.google.com/base/feeds/snippets/4776201646741876078",
        "doc": {
          "discount": 0.59
        }
      },
      {
        "joinId": "http://www.google.com/base/feeds/snippets/16537637847870148950",
        "doc": {
          "discount": 0.22
        }
      }
    ]
  },
  "x_manufacturer_offers": {
    "count": 3,
    "external": [
      {
        "joinId": "apple software",
        "doc": {
          "discount": 0.3
        }
      }
    ]
  }
}

So you can see that there are two external results sections, one for product offers and one for manufacturer offers, and how the offers are matched to the products by their join ids (either the product id or the manufacturer).

Next time…

In my next blog post, I’ll dive in to another demonstration of XJoin, in which I show how to use click-through data to influence the score of subsequent searches.

The post XJoin for Solr, part 1: filtering using price discount data appeared first on Flax.

]]>
http://www.flax.co.uk/blog/2016/01/25/xjoin-solr-part-1-filtering-using-price-discount-data/feed/ 5
Elasticsearch vs. Solr: performance improvements http://www.flax.co.uk/blog/2015/12/18/elasticsearch-vs-solr-performance-improvements/ http://www.flax.co.uk/blog/2015/12/18/elasticsearch-vs-solr-performance-improvements/#respond Fri, 18 Dec 2015 17:04:20 +0000 http://www.flax.co.uk/?p=2873 I had been planning not to continue with these posts, but after Matt Weber pointed out the github pull requests (which to my embarrassment I’d not even noticed) he’d made to address some methodological flaws, another attempt was the least I could … More

The post Elasticsearch vs. Solr: performance improvements appeared first on Flax.

]]>
I had been planning not to continue with these posts, but after Matt Weber pointed out the github pull requests (which to my embarrassment I’d not even noticed) he’d made to address some methodological flaws, another attempt was the least I could do.

For Solr there was a slight reduction in mean search time, from 39ms (for my original, suboptimal query structure) to 34ms and median search time from 27ms to 25ms – see figure 1. Elasticsearch, on the other hand, showed a bigger improvement – see figure 2. Mean search time went down from 39ms to 27ms and median from 36ms to 24ms.

Comparing Solr with Elasticsearch using Matt’s changes, we get figure 3. The medians are close, at 25ms vs 24ms, but Elasticsearch has a significantly lower mean, at 27ms vs 34ms. The difference is even greater at the 99th percentile, at 57ms vs 126ms.

These results seem to confirm that Elasticsearch still has the edge over Solr. However, the QPS measurement (figure 4) now gives the advantage to Solr, at nearly 80 QPS, vs 60 QPS for Elasticsearch. The latter has actually decreased since making Matt’s changes. This last result is very unexpected, so I will be trying to reproduce both figures as soon as I have the chance (as well as Matt’s suggestion of trying the new-ish filter() operator in Solr).

Our sincere thanks to Matt for his valuable input.

Figure 1: Solr search times, no indexing, original vs Matt

Figure 2: ES search times, no indexing, original vs Matt

Figure 3: ES vs Solr search times, no indexing, Matt’s changes

Figure 4: QPS, original vs Matt’s changes

The post Elasticsearch vs. Solr: performance improvements appeared first on Flax.

]]>
http://www.flax.co.uk/blog/2015/12/18/elasticsearch-vs-solr-performance-improvements/feed/ 0
Elasticsearch London Meetup: Templates, easy log search & lead generation http://www.flax.co.uk/blog/2015/01/30/elasticsearch-london-meetup-templates-easy-log-search-lead-generation/ http://www.flax.co.uk/blog/2015/01/30/elasticsearch-london-meetup-templates-easy-log-search-lead-generation/#comments Fri, 30 Jan 2015 14:01:05 +0000 http://www.flax.co.uk/blog/?p=1363 After a long day at a Real Time Analytics event (of which more later) I dropped into the Elasticsearch London User Group, hosted by Red Badger and provided with a ridiculously huge amount of pizza (I have a theory that … More

The post Elasticsearch London Meetup: Templates, easy log search & lead generation appeared first on Flax.

]]>
After a long day at a Real Time Analytics event (of which more later) I dropped into the Elasticsearch London User Group, hosted by Red Badger and provided with a ridiculously huge amount of pizza (I have a theory that you’ll be able to spot an Elasticsearch developer in a few years by the size of their pizza-filled belly).

First up was Reuben Sutton of Artirix, describing how his team had moved away from the Elasticsearch Ruby libraries (which can be very slow, mainly due to the time taken to decode/encode data as JSON) towards the relatively new Mustache templating framework. This has allowed them to remove anything complex to do with search from their UI code, although they have had some trouble with Mustache’s support for partial templates. They found documentation was somewhat lacking, but they have contributed some improvements to this.

Next was David Laing of CityIndex describing Logsearch, a powerful way to spin up clusters of ELK (Elasticsearch+Logstash+Kibana) servers for log analysis. Based on the BOSH toolchain and open sourced, this allows CityIndex to create clusters in minutes for handling large amounts of data (they are currently processing 50GB of logs every day). David showed how the system is resilient to server failure and will automatically ‘resurrect’ failed nodes, and interestingly how this enables them to use Amazon spot pricing at around a tenth of the cost of the more stable AWS offerings. I asked how this powerful system might be used in the general case of Elasticsearch cluster management, but David said it is targeted at log processing – though of course, according to some, everything will soon be a log anyway!

The last talk was by Alex Mitchell and Francois Bouet of Growth Intelligence who provide lead generation services. They explained how they have used Elasticsearch at several points in their data flow – as a data store for the web pages they crawl (storing these in both raw and processed form using multi-fields), for feature generation using the term vector API and to encode simple business rules for particular clients – as well as to power the search features of their website, of course.

A short Q&A with some of the Elasticsearch team followed: we heard that the new Shield security plugin has had some third-party testing (I suggested the details be published if possible) and got a preview of what might appear in the 2.0 release – further improvements to the aggregations features, including derivatives and anomaly detection, sound very useful. A swift drink and a natter about the world of search with Mark Harwood and it was time to get the train home. Thanks to all the speakers and of course Yann for organising as ever – see you next time!

The post Elasticsearch London Meetup: Templates, easy log search & lead generation appeared first on Flax.

]]>
http://www.flax.co.uk/blog/2015/01/30/elasticsearch-london-meetup-templates-easy-log-search-lead-generation/feed/ 1