Principles of Solr application design – part 2 of 2

We’ve been working internally on a document encapsulating how we build (and recommend others should build) search applications based on Apache Solr, probably the most popular open source search engine library. As an early Christmas present we’re releasing these as a two part series – if you have any feedback we’d welcome comments! Here’s the second part, you can also read the first part.

8. Have enough RAM

The single biggest performance bottleneck in most search installations is lack of RAM. Search is an I/O-intensive process, and the more disk reads that can be cached in memory, the better performance will be. As a rough guideline, your available RAM should be at least 50% of the total size of your Solr index files; for demanding applications, up to 100% of the index size may be necessary.

I/O caching is incremental rather than immediate, and the caches may need some minutes of searches under load to warm up. Don’t expect high performance until they are thoroughly warm.

An increasingly popular alternative is to use solid state disks (SSDs) instead of traditional hard disks. Their random-read performance is far better than that of spinning disks, so cold searches should be reasonably fast. They also reduce the amount of RAM required – perhaps to as little as 10% of the index size – although, as always, this will require testing for the application in question.

9. Use a dedicated machine or VM

Don’t share your Solr servers with any other demanding processes such as SQL databases. For dependable performance, Solr should not have to compete with other processes for resources. VMs are an effective way of ring-fencing resources.

10. Use MMapDirectory and 64-bit systems

By default, Solr on 64-bit systems will open indexes with Lucene’s MMapDirectory, which memory-maps files rather than opening them for conventional read/seek access. Don’t change this! MMapDirectory allows for the most effective use of resources, in particular RAM (which, as already described, is a crucial resource for search performance).
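For reference, the example solrconfig.xml shipped with Solr 4.x typically declares the directory factory as follows (NRTCachingDirectoryFactory wraps the default implementation, which selects MMapDirectory on 64-bit systems) – leave it as it is unless you have a very good reason:

<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>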

11. Tune the Solr caches

The OS disk cache improves performance at a low level. At a higher level, Solr has a number of built-in caches stored in the JVM heap, which can improve performance still further: the filter cache, the field value cache, the query result cache and the document cache. The filter cache is probably the most important to tune if you are using filtered queries extensively or faceting with the enum method. Each entry in the filter cache takes up (number of docs on shard / 8) bytes of space, so with a cache limit of 4,000 entries a full cache will require 4,000 × (numDocs / 8), i.e. (numDocs × 500), bytes. However, tuning all of these caches has the potential to improve performance.

To tune the caches, you should allow Solr to run for a while with real or simulated search activity. Then go to the Plugin/Stats page in the admin web interface. The first important number in the cache statistics is ‘hitratio’. This should ideally be as close to 1.0 as possible, indicating that most lookups are being serviced by the cache. Then, ‘evictions’ indicates how many items have been removed from the cache due to limited space. This should ideally be as close to zero as possible, or at least much smaller than ‘lookups’.

If ‘evictions’ is high and ‘hitratio’ low, increase the maximum cache size in solrconfig.xml. There is no universally good starting point – it depends on the application – but we often pick 4,000.
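For example, the filter cache might be configured in solrconfig.xml as follows – the sizes shown here are a starting point for tuning, not a recommendation:

<filterCache class="solr.FastLRUCache"
             size="4000"
             initialSize="512"
             autowarmCount="128"/>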

If the cache is performing well, it may be worth reducing the maximum size and re-testing. The purpose of the maximum size is to prevent the cache growing without limit and filling the JVM heap, which links to point 12 below.

See here for more information on Solr caches.

12. Minimise JVM heap space

Once you have tuned your Solr caches, try to reduce the maximum JVM heap (set with -Xmx) to a reasonably small size – big enough to hold the caches and all the other data required for searching and indexing, but not much bigger. There is a graphical depiction of the JVM heap in the Solr admin dashboard which allows a quick overview for rough tuning. For a better picture, it may be worth using a tool like JConsole to monitor the heap as the application is used.

The reason to reduce the heap size is to free RAM for the OS disk cache, as described in point 8.

Garbage collection (GC) can be a problem if the heap size is large. See here for information on GC tuning in Solr and other performance issues.

13. Handle multiple languages with multiple fields

Some search applications need to support documents of different languages within the same index. This may conflict with the use of stemming, stopwords and synonyms to improve search accuracy. Furthermore, languages like Japanese are not tokenised by Solr in the same way as European languages, due to different conventions on word boundaries. One effective method for supporting multiple languages in a single index, with per-language term processing, is outlined below. Note that this depends on knowing in advance what language each section of text is in.

First, create a variant of each text field in the index schema for each language to be supported. The schema.xml supplied with Solr has example fieldtypes for a wide range of languages which may be adapted as necessary. For example:

˂field name="content_en" type="text_en" indexed="true" stored="true"/ ˃
˂field name="content_fr" type="text_fr" indexed="true" stored="true"/ ˃
˂field name="content_jp" type="text_jp" indexed="true" stored="true"/ ˃

Note the use of language codes to distinguish the names of the fields and field types. Then, when indexing each document, send each section of text to the appropriate field. E.g., if the document is entirely in English, send the whole thing to content_en. If it has sections in English, French and Japanese, send them to content_en, content_fr and content_ja respectively. This ensures that text is tokenised and normalised appropriately for its language.
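For example, a document with English and French sections might be indexed with an XML update message like the following (the id field and the content are purely illustrative):

<add>
  <doc>
    <field name="id">doc42</field>
    <field name="content_en">A test document about search</field>
    <field name="content_fr">Un document de test sur la recherche</field>
  </doc>
</add>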

Finally for searching, use the eDisMax query parser, and include all the language fields in the qf parameter (and pf, if using). E.g., in solrconfig.xml:

˂requestHandler name="/search" class="solr.SearchHandler"˃
˂lst name="defaults"˃
˂str name="qf"˃content_en content_fr content_jp˂/str˃
˂str name="pf"˃content_en content_fr content_jp˂/str˃
...

When a search is executed with this handler, subqueries will be generated for each language with the appropriate term processing, and searched against each language text field. This approach should give the best precision and recall in a multi-language application.

Posted in Reference, Technical

December 17th, 2013

Principles of Solr application design – part 1 of 2

We’ve been working internally on a document encapsulating how we build (and recommend others should build) search applications based on Apache Solr, probably the most popular open source search engine library. As an early Christmas present we’re releasing these as a two part series – if you have any feedback we’d welcome comments! So without further ado here’s the first part:

1. Use the latest release of Solr

Unless there are compelling reasons not to, such as reliance on a discontinued feature (which is rare), it is best to use the latest release of Solr, downloaded from http://lucene.apache.org/solr/. Every minor release in the 4.x series has brought both functional and performance enhancements, and point releases have fixed known bugs. Since the API (as a rule) remains backwards compatible, the potential gains in performance and utility should outweigh the minor inconvenience of the upgrade.

2. Use SolrCloud for scaling and robustness

Before the Solr 4 release, support for sharding (distributing a single search over many Solr instances) and replication (for robustness and scaling search load) involved a significant amount of manual configuration and development. The introduction of SolrCloud means that sharding and replication are now built into the core product, and can be used with simple configuration and no extra coding.

For trivial applications, SolrCloud may not be required, but it is the simplest way to build in robustness and scalability. There’s more about SolrCloud here.
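As an example, with SolrCloud running, a sharded and replicated collection can be created with a single call to the Collections API (the collection name and the shard/replica counts below are illustrative):

http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=2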

3. Don’t expose the Solr API

Although Solr is not inherently insecure, it is not designed to be exposed to end-users (and emphatically not to the internet at large). Anyone with access to the root Solr endpoint could delete indexes and modify or insert items at will. Exposing only the search handlers (e.g. /solr/select) prevents this, but is still a bad idea, since it allows users to construct arbitrary queries which could degrade performance or give access to unauthorised data. Furthermore, there remains the slim possibility of security holes in the Solr API.

For these reasons, any external access to search should be through a proxy interface which is restricted to the functionality required by the application. Access to the Solr API should be restricted by network design and/or firewalls. This applies equally to AJAX UIs, which should talk to Solr via an intermediary web application rather than directly.

The intermediary code should perform at least some basic validation of parameters before sending them to Solr, for example checking their type and ensuring that query strings are under a certain length (depending on the search interface). This allows attempts to compromise the system to be detected and blocked at an early stage.
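As an illustration, the validation might look something like the following minimal Java sketch – the parameter names and limits are examples only, and a real proxy would check every parameter it accepts:

// Minimal illustrative parameter validation for a search proxy.
// MAX_QUERY_LENGTH and MAX_ROWS are example limits, not recommendations.
public final class QueryValidator {

    private static final int MAX_QUERY_LENGTH = 200;
    private static final int MAX_ROWS = 50;

    /* Returns true if the user-supplied parameters look safe to forward to Solr */
    public static boolean isValid(String queryString, String rowsParam) {
        // Reject missing or over-long query strings
        if (queryString == null || queryString.isEmpty()
                || queryString.length() > MAX_QUERY_LENGTH) {
            return false;
        }
        // Ensure 'rows' is a small positive integer
        try {
            int rows = Integer.parseInt(rowsParam);
            return rows > 0 && rows <= MAX_ROWS;
        } catch (NumberFormatException e) {
            return false;
        }
    }
}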

4. Don’t use third-party Solr client libraries

The problem with third-party client libraries is that they create a tight coupling between the application and Solr. The Solr XML and JSON APIs are simple, and a wide range of client libraries for these formats are readily available for most programming languages. Third-party libraries are an unnecessary additional dependency and a potential source of bugs and unexpected behaviour. Another risk is that development may be discontinued for various reasons, meaning that future Solr features are not easily accessible.

The one exception to this rule is the SolrJ Java client library, since it is part of the general Solr release and is therefore fully compliant with and tested against the corresponding version of Solr.
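As a sketch, querying Solr 4.x from SolrJ looks like this (the URL and core name are examples, and exception handling is omitted):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

SolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

SolrQuery query = new SolrQuery("title:solr"); // a simple query on the title field
query.setRows(10); // return at most ten results

QueryResponse response = solr.query(query);
System.out.println("Found " + response.getResults().getNumFound() + " documents");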

5. Specify interfaces

All interfaces between components in the application must be agreed between sys ops and developers before development is started. Interfaces should be treated as contracts which software components adhere to. Early documentation of interfaces will reduce the risk of unexpected dependencies leading to problems in deployment.

As far as possible, interfaces should be RESTful web APIs and use standard formats such as JSON and XML. This creates loose coupling between components and also makes it easy to test functionality from the command line or a browser.

6. Put apps live early, on isolated systems

Development should be iterative, with short development cycles (no more than a few weeks). Code should be tested and deployed at the end of each cycle. By using isolated systems, fake data and/or limiting access to authorised testers, functionality and performance may be tested as soon as possible on a ‘live’ system, avoiding the risk of unexpected problems if deployment is postponed until the end of the development cycle.

7. Do realistic performance tests early and often

Except for very small indexes, search performance is often unpredictable, particularly under load. To ensure that performance meets requirements, testing a full index under load with realistic queries should be scheduled as early as possible in development. If you don’t have the data available to create a full index, simulate it (e.g. using freely available text such as Wikipedia).

As new functions (e.g. facets) are added, performance characteristics may change significantly, so it is important that performance tests are part of every development cycle. JMeter is a popular tool for load testing; alternatively, test scripts can easily be written in a language like Python.

More to come next week!

Posted in Reference, Technical

December 11th, 2013

Introducing Luwak, a library for high-performance stored queries

A few weeks ago we spoke in Dublin at Lucene Revolution 2013 on our work in the media monitoring sector for various clients including Gorkana and Australian Associated Press. These organisations handle a huge number (sometimes hundreds of thousands) of news articles every day and need to apply tens of thousands of stored expressions to each one, which would be extremely inefficient if done with standard search engine libraries. We’ve developed a much more efficient way to achieve the same result, by pre-filtering the expressions before they’re even applied: effectively we index the expressions and use the news article itself as a query, which led to the presentation title ‘Turning Search Upside Down’.

We’re pleased to announce that the core of this process, a Java library we’ve called Luwak, is now available as open source software for your own projects. Here’s how you might use it:

/* Imports assumed: the Luwak classes live in the uk.co.flax.luwak package,
   and TermQuery/Term come from Apache Lucene */
import org.apache.lucene.index.Term;
import org.apache.lucene.search.TermQuery;
import uk.co.flax.luwak.*;

Monitor monitor = new Monitor(new TermFilteredPresearcher()); /* Create a new monitor */

MonitorQuery mq = new MonitorQuery("query1", new TermQuery(new Term(textfield, "test")));
monitor.update(mq); /* Create and load a stored query with a single term */

InputDocument doc = InputDocument.builder("doc1")
    .addField(textfield, document, WHITESPACE) /* textfield, document and WHITESPACE (an Analyzer) are assumed to be defined elsewhere */
    .build(); /* Load a document (which could be a news article) */

DocumentMatches matches = monitor.match(doc); /* Retrieve which queries it matches */

The library is based on our own fork of the Apache Lucene library (as Lucene doesn’t yet have a couple of features we need, although we expect these to end up in a release version of Lucene very soon). Our own tests have produced speeds of up to 70,000 stored queries applied to an article in around a second on modest hardware. Do let us know any feedback you have on Luwak – we think it may be useful for various monitoring and classification tasks where high throughput is necessary.

Posted in Technical

December 6th, 2013

Search Solutions 2013, a review

Yesterday was the always interesting Search Solutions one day conference held by the BCS IRSG in London, a mix of talks on different aspects of search. The first presentation was by Behshad Behzadi of Google on Conversational Search, where he showed a speech-capable search interface that allowed a ‘conversation’ with the search engine – context being preserved – so the query “where are Italian restaurants in Chelsea” followed by “no I prefer Chinese” would correctly return results about Chinese restaurants. The demo was impressive and we can expect to see more of this kind of technology as smartphone adoption rises. Wim Nijmeijer of Coveo followed with details of how their own custom connectors to a multitude of repositories could enable Complex enterprise search delivered in a day. This of course assumes that no complex mapping of fields or schemas from the source to the search engine index is necessary, which I suspect it often is – I’m not alone in being slightly suspicious of the supposed timescale. Nikolaos Nanas from Thessaly in Greece then presented on Adaptive Information Filtering: from theory to practise which I found particularly interesting as it described filtering documents against a user’s interest with the latter modelled by an adaptive, weighted network – he showed the Noowit personalised magazine application as an example. With over 1000 features per user and no language specific requirements this is a powerful idea.

After a short break we continued with a talk by Henning Rode on CV Search at TextKernel. He described a simple yet powerful UI for searching CVs (resumes) with autosuggest and automatic field recognition (type in “Jav” and the system suggests “Java” and knows this is a programming language or skill). He is also working on systems to autogenerate queries from job vacancies using heuristics. We’ve worked in the recruitment space ourselves so it was interesting to hear about their approach, although the technical detail was light. Following Henning was Dermot Frost talking about Information Preservation and Access at the Digital Repository of Ireland and their use of open source technology including Solr and Blacklight to build a search engine with a huge variety of content types, file formats and metadata standards across the items they are trying to digitally preserve. Currently this is a relatively small collection of data but they are planning to scale up over the next few years: this talk reminded me a little of last year’s by Emma Bayne of the UK’s National Archive.

After lunch we began a session named Understanding the User, beginning with Filip Radlinski of Microsoft Research. He discussed Sensitive Online Search Evaluation (with arXiv.org as a test collection) and how interleaved results is a powerful technique for avoiding bias. Next was Mounia Lalmas of Yahoo! Labs on what makes An Engaging Click (although unfortunately I had to pop out for a short while so I missed most of what I am sure was a fascinating talk!). Mags Hanley was next on Understanding users search intent with examples drawn from her work at TimeOut – the three main lessons being to know the content in context, the time of year and the users’ mental model in context. Interestingly she showed how the most popular facets used differed across TimeOut’s various international sites – in Paris the top facet was perhaps unsurprisingly ‘cuisine’, in London it was ‘date’.

After another short break we continued with Helen Lippell’s talk on Enterprise Search – how to triage problems quickly and prescribe the right medicine – her five main points being: analyse user needs, fix broken content, focus on quick wins in the search UI, make sure you are able to tweak the search engine itself in a documentable fashion, and remember the importance of people and process. Her last point ‘if search is a political football, get an outsider perspective’ is of course something we would agree with! Next was Peter Wallqvist of Ravn Systems on Universal Search and Social Networking, where he focussed on how to allow users to interact directly with enterprise content items by tagging, sharing and commenting – so as to derive a ‘knowledge graph’ showing how people are connected by their relationships to content. We’ve built systems in the past that have allowed users to tag items in the search result screen itself, so we can agree on the value of this approach. Our last presenter was Kristian Norling of Findwise with Reflections on the 2013 Enterprise Search Survey – some more positive news this year, with budgets for search increasing and 79% of respondents indicating that finding information is of high importance for their organisation. Although most respondents still have fewer than one full-time staff member working on search, Kristian made the very good point that recruiting just one extra person would thus give them a competitive advantage. Perhaps, as he says, we’ve now reached a tipping point for the adoption of properly funded enterprise search, regarded as an ongoing journey rather than a ‘fire and forget’ project.

The day finished with a ‘fishbowl’ session, during which there was a lot of discussion of how to foster links between the academic IR community and industry, then the BCS IRSG AGM and finally a drinks reception – thanks to all the organisers for a very interesting and enlightening day and we look forward to next year!

Lucene Revolution 2013, Dublin: day 2

A slow start to the day, possibly due to the aftereffects of the conference party the night before, but the stadium was still buzzing. I went to Rafal Kuć’s talk on SolrCloud, which is becoming the standard way to build scalable Solr installations (we have two projects underway that use it). The shard splitting features in recent releases of Solr were interesting – previously one would either have to re-index the whole collection to a new set of shards or, more often, over-allocate the number of shards to cope with a future increase in size; shard splitting instead allows you to split an existing shard into two.

As our own talk was looming (and we needed to practise) I missed the next session unfortunately, although I hear from colleagues that the talk on High Performance JSON Search and Relational Faceted Browsing was good. We then broke for lunch during which we had a chance to test an idea Upayavira had come up with in the pub the night before: whether leeks are suitable for juggling, given that none of us had brought any proper equipment! They did work, but only briefly – luckily the stadium staff were very good natured about sweeping up the remains afterwards.

Our talk on Turning Search Upside Down: Using Lucene for Very Fast Stored Queries was next, during which I was ably assisted by Alan Woodward who has done the majority of the work during some recent projects for media monitoring companies. We’re hoping to release an open source library, Luwak, based on this work very soon – watch this space!

UPDATE: The video of our talk is now available and so is Luwak!

After an interesting talk by Adrien Grand on What’s in a Lucene Index (which unfortunately overran a little, so we missed the closing remarks) it was time to say our goodbyes and head home. Thanks to all the Lucidworks team for organising a fascinating and friendly event – all of our team found it interesting and it was great to catch up with friends old and new. See you next time!

Posted in Uncategorized

November 8th, 2013

Lucene Revolution 2013, Dublin: day 1

Four of the Flax team are in Dublin this week for Lucene Revolution, almost certainly the largest event centred on open source search and specifically Lucene. There are probably a couple of hundred Lucene enthusiasts here and the event is being held at the Aviva Stadium on Lansdowne Road: look out of the windows and you can see the pitch! Here are some personal reflections: a number of the talks I attended today have a connection to our own work in media monitoring, which we’re talking about tomorrow.

Doug Turnbull’s Test Driven Relevancy was interesting, discussing OSC’s Quepid tool that allows content owners and search experts to work together to tweak and tune Solr’s options to present the right results for a query. I wondered whether this tool might eventually be used to develop a Learning to Rank option for Solr, as Lucene 4 now supports a pluggable scoring model.

I enjoyed Real-Time Inverted Search in the Cloud Using Lucene and Storm, during which Joshua Conlin told us about running hundreds of thousands of stored queries in a distributed architecture. Storm in particular sounds worth investigating further. There is currently no attempt to reduce or ‘prune’ the set of queries before applying them: Joshua quoted speeds of 4,000 queries/sec across their cluster of 8 instances – impressive numbers, but our own monitoring applications are working at 20 times that speed by working out which queries not to apply.

I broke out at this point to catch up with some contacts, including the redoubtable Iain Fletcher of Search Technologies – always a pleasure. After a sandwich lunch I went along to hear Andrzej Bialecki of Lucidworks talk about Sidecar Indexes, a method for allowing rapid updates to Lucene fields. This reminded me of our own experiments in this area using Lucene’s pluggable codecs.

Next was more from the OpenSource Connections team, as John Berryman talked about their work to update a patent search application that uses a very old search syntax, BRS. This sounds very much like the work we’ve done to translate one search engine syntax into another for various media monitoring companies – so far we can handle dtSearch and we’re currently finishing off support for HP/Autonomy Verity’s VQL (PDF).

This latter issue has got me thinking that perhaps it might be possible to collaboratively develop an open source search engine query language – various parsers could be developed to turn other search syntaxes into this language, and search engines like Lucene (or anything else) could then be extended to implement support for it. This would potentially allow much easier migration between search engine technologies. I’m discussing the concept with various folks at the event this week so do please get in touch if you are interested!

Back tomorrow with a further update on this exciting conference – tonight we’re all off to the Temple Bar area of Dublin for food and drink, generously provided by Lucidworks who should also be thanked for organising the Revolution.

Posted in Technical, events

November 6th, 2013

G-Cloud and open file formats, a cautionary tale

We’re lucky enough to have our services available on the G-Cloud, a new initiative by the UK Government’s Cabinet Office with the aim of breaking the sometimes monopolistic practices of ‘big IT’ when supplying government clients. We’ve recently had a couple of contracts procured via the G-Cloud iii framework and one of the requirements is to report whenever a client is invoiced. This is done via a website called Management Information Systems Online (MISO).

Part of the process is to input various mysterious Product Codes, and to find out what these were I downloaded a file from the MISO website. I use the Firefox browser and OpenOffice so I had assumed that opening this file would be a relatively simple process…perhaps unwisely.

Firstly, due to some quirk of the website and/or browser, the file arrives with no file extension. I’m assuming it’s some kind of Microsoft Office document, so I try renaming it to .xls as an Excel spreadsheet and open it in OpenOffice Calc. This doesn’t work, as I end up with a load of XML in the spreadsheet cells. As it’s XML I wonder if it’s a newer, XML-powered Office format, so I rename it to .xlsx, but no, it seems that doesn’t work either. Opening up the file in a text editor shows it’s some kind of XML with Microsoft schemas abounding. At this point I tried contacting the MISO technical support department, but they weren’t able to help.

A quick Google and I’ve discovered that the file is probably SpreadsheetML, a file format used before 2007, when Microsoft finally went the whole hog and embraced (well, forced everyone else to embrace) their own XML-based standard for Office documents. SpreadsheetML is something OpenOffice can read, so I try renaming the file to .xml and importing it. OpenOffice now tells me "OpenOffice.org requires a Java runtime environment (JRE) to perform this task. The selected JRE is defective."

This is now taking far too long. After some more research I discover that what this actually means is that OpenOffice needs a version of Java 6 (now discouraged by Oracle), and I have to register for an Oracle account just to download it. Finally, OpenOffice is able to read the file and I can at last fill in the original form.

If anything this process proves that central government has a long way to go towards adopting open standards and using plain, widely adopted file formats. The G-Cloud framework is a great step forward – but some of the details still need some work.

Solr and the changing landscape of search

This morning I was told about the launch of a new US-based search company, Heliosearch, founded by the creator of Apache Solr, Yonik Seeley. It seems the landscape of open source search, and in particular Solr, is changing again – Heliosearch are planning their own ‘certified’ distribution of Solr plus a raft of support, consulting and services. In the meantime, the company Yonik co-founded (and our partners), LucidWorks, have recently launched an ‘App Store’ for search, the Solr Marketplace, offering add-ons to the core engine from both themselves and others.

What we’re seeing here is the further growth of an ecosystem based around what has almost become the default choice for new and migrating search applications. Some clients will want a packaged distribution of Solr, some will be happy to download the source from Apache, some will need help getting started and some will just need help when things get complicated, or support for a running application. We’ve seen all of these requirements and more in the last year.

Next week the largest conference on open source search, Lucene Revolution, is held in Dublin, and four of the Flax team are attending. Do let us know if you’d like to meet up – I don’t think there’s going to be a lack of things to talk about!

Posted in News, events

October 29th, 2013

Finding the elephant in the room: open source search & Hadoop grow closer together

I’ve been lucky enough to attend two talks on Hadoop in the last few weeks, which have made me take a closer look at this technology. In case you didn’t know, Hadoop is an Apache top-level open source project comprising a framework for distributed computing and storage, originally created by Doug Cutting (also the creator of Apache Lucene) while at Yahoo! in 2005. Distributed computing is carried out using MapReduce (roughly speaking, the ‘map’ part splits a processing task up into chunks and distributes these among various processing nodes, while the ‘reduce’ part brings all the results together again) and the storage uses the Hadoop Distributed File System (HDFS). There are other parts of Hadoop including a database (HBase), a data warehouse with an SQL-like language (Hive), a scripting language (Pig) and more.
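To make the map/reduce division concrete, here is a minimal word-count sketch using Hadoop’s Java MapReduce API – the class names are ours, and the job setup (driver) code is omitted for brevity:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // 'Map': split each line into words and emit (word, 1) for every occurrence
    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (token.isEmpty()) continue; // skip empty tokens from leading whitespace
                word.set(token);
                context.write(word, ONE);
            }
        }
    }

    // 'Reduce': sum the per-word counts gathered from all the mappers
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable count : values) {
                sum += count.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}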

Those I’ve spoken to who have attempted to build applications on Hadoop have said that it’s very much a kit of parts rather than an integrated platform, so not that easy to get started with – which has led to the emergence of various vendors providing ‘curated’ distributions and support, much as Lucidworks does for Apache Lucene/Solr. Cloudera, Hortonworks, and MapR are just some of the best-known of these vendors. With everyone jumping on the BigData bandwagon these days some of these vendors have attracted significant interest and funding.

As you might expect, full-text search is often required for these distributed systems, and there have been various attempts to bring Hadoop and search closer together. Hortonworks support integration with Elasticsearch, although this currently appears to mean that you can use Hive or Pig to move data between Hadoop and a separate Elasticsearch cluster, rather than the search engine running on the cluster itself. Cloudera’s integration of Hadoop with Solr appears to be tighter, with Solr storing its indexes on HDFS directly (perhaps not surprising considering Lucene/Solr committer Mark Miller, who is responsible for most recent SolrCloud development, works for Cloudera). Cloudera even has its own data conditioning framework, Flume (yes, it seems we need yet another data conditioning/pipelining solution!), and allows for distributed indexing. MapR have partnered with LucidWorks and integrated LucidWorks Search into their distribution. All these vendors are heavy contributors to Hadoop of course, and most also contribute to Lucene/Solr or Elasticsearch.

Since Hadoop has been linked with search from the beginning one can hope that these integration efforts will continue – applications that require distributed search are becoming increasingly common and Hadoop, despite its nature as a kit of parts requiring assembly, is a good foundation to build on.

Elasticsearch meetup – Duedil, Hadoop and more

I visited the London Elasticsearch User Group’s meetup last night for the first time, in the rather splendid HQ of Skills Matter just down from Old Street – the venue had a great buzz. The first speaker was Chris Simpson from Duedil, who provide UK company information gleaned from Companies House and other sources. He told us about using Elasticsearch to provide faceted search (including some great clickable bar graphs for numerical range facets) and how they bulk index around 9 million company records in about an hour, using Elasticsearch’s alias features to swap in new indexes once they’re ready – so there is no impact on search performance while indexing. He mentioned a common problem with search engines, which is that there is no easy way to be sure how much hardware you’ll need until you ‘know your data and know your hosts’.

Next up was Chris Harris from Hortonworks, who provide a packaged and supported Apache Hadoop distribution. He explained how Hadoop can be used for capturing huge numbers of transactions (these could be interactions with an e-commerce website for example) and for storing them in a distributed database on low-cost hardware. The Hive ‘SQL-like’ language can then be used to extract the data and send it directly to Elasticsearch, or indeed to run queries on Elasticsearch and send the results back to Hadoop as a table. Similar processes can be run with the Pig scripting language. There followed some interesting discussions about the future of Hadoop, where search engines such as Elasticsearch may run directly on Hadoop nodes, working with the data locally. It will be interesting to compare this with the approach taken by Cloudera who are talking on Hadoop & Solr this Thursday at our own Meetup in Cambridge.

Clinton Gormley from Elasticsearch finished up with a Q&A, during which he talked about the new Phrase Suggesters based on Lucene’s new Finite State Machines, and gave hints about when the long-awaited 1.0 release of Elasticsearch would appear – apparently early 2014 is now likely.

Thanks to all the speakers and to Elasticsearch for the very welcome beer and pizza – this certainly won’t be our last visit to this user group on what is an increasingly adopted open source search engine.