monitoring – Flax | The Open Source Search Specialists | http://www.flax.co.uk

ECIR 2017 Industry Day, our book & a demo of live TV factchecking
(Mon, 24 Apr 2017) http://www.flax.co.uk/blog/2017/04/24/ecir-2017-industry-day-book-demo-live-tv-factchecking/

The post ECIR 2017 Industry Day, our book & a demo of live TV factchecking appeared first on Flax.

I visited Aberdeen before Easter to speak at Industry Day, a part of the European Conference on Information Retrieval. Following a reception at Aberdeen’s Town House (a wonderful building) hosted by the Lord Provost I spent an evening with various information retrieval luminaries including Professor Udo Kruschwitz of the University of Essex. We had a chance to discuss the book we’re co-authoring (draft title ‘Searching the Enterprise’, designed as a review of the subject for those considering a PhD or those in business wanting to know the current state of the art – it should be out later this year) and I also caught up with our associate Tony Russell-Rose of UXLabs.

Industry Day started with a talk by Peter Mika of Norwegian media group Schibsted on modelling user behaviour for delivering personalised news. It was interesting to hear his views on Facebook and the recent controversy about their removal of a photo posted by a Schibsted group newspaper, and how this might be a reason Schibsted carry out their own internal developments rather than relying on the algorithms used by much larger companies. Edgar Meij was up next talking about search at Bloomberg (which we’ve been involved in) and it was interesting to hear that they might be contributing some of their alerting infrastructure back to Apache Lucene/Solr. James McMinn of startup Scoop Analytics followed, talking about real time news monitoring. They have built a prototype system based on PostgreSQL rather than a search engine, indexing around half a billion tweets, that allows one to spot breaking news much earlier than the main news outlets might report it.

The next session started with Michaela Regneri of OTTO on Newsleak.io, a project in collaboration with Der Spiegel “producing a piece of software that allows to quickly and intuitively explore large amounts of textual data”. She stressed how important it is to have a common view of what is ‘good’ performance in collaborative projects like this. Richard Boulton (who worked at Flax many years ago) was next in his role as Head of Software Engineering at the Government Digital Service, talking about the ambitious project to create a taxonomy for all UK government content. So far, his team have managed to create an alpha version of this for educational content – but they don’t have the time or resources in-house to tag content themselves, and must therefore work with the relevant departments to do so. They have created various software tools to help, including an automatic topic tagger using Latent Dirichlet Allocation – which, given this is the GDS, is of course open source and available.

Unfortunately I missed a session after this due to a phone call, but managed to catch some of Elizabeth Daly of IBM talking about automatic claim detection using the Watson framework. Using Wikipedia as a source, this can identify statements that support a particular claim for an argument and tag them as ‘pro’ or ‘con’. This topic led neatly on to Will Moy of Full Fact, who we have been working with recently, in a ‘sandwich’ session with myself. Will talked about how Full Fact has been working for many years to develop neutral, un-biased factchecking tools and services and I then spoke about the hackday we ran recently for Full Fact and particularly about our Luwak library and how it can be used to spot known claims by politicians in streaming news. Will then surprised me and impressed the audience by showing a prototype service that watches several UK television channels in real time, extracts the subtitles and checks them against a list of previously factchecked claims – using the Luwak backend we built at the hackday. Yes, that’s live factchecking of television news, very exciting!

Thanks to Professor Kruschwitz and Tony Russell-Rose for putting together the agenda and inviting both me and Will to speak – it was great to be able to talk about the exciting work we’re doing with Full Fact and to hear about the other projects.

Talks: Replacing Autonomy IDOL with Solr, Elasticsearch for e-commerce & relevancy tuning
(Wed, 04 Nov 2015) http://www.flax.co.uk/blog/2015/11/04/talks-replacing-autonomy-idol-with-solr-elasticsearch-for-e-commerce-relevancy-tuning/

I’ll be speaking at several events over the next few weeks, in the UK and abroad. On the 19th of November I’ll be at the FIBEP World Media Intelligence Congress in Vienna, to talk about how we helped our client Infomedia migrate from closed-source search engines (Autonomy IDOL and Verity) to a new platform based on Apache Lucene/Solr and our own Luwak stored search library. Infomedia are Denmark’s leading provider of media monitoring and analysis and wanted to future-proof their search platform: we’ll talk about how open source makes this possible, how we implemented stored search and handled highly complex queries, and how the new platform is scalable and flexible.

On the 25th I’ll be presenting at the London Elasticsearch Usergroup with our client Westcoast, who we have been helping with an Elasticsearch implementation. Westcoast are a B2B supplier of electronics and white goods with yearly revenues of over £1billion, and we’ve helped them implement a powerful new search engine for their website. E-commerce is one sector where good search is an essential part of driving revenue.

Next, on the 26th I’ll be talking at one of my favourite events of the year, the British Computer Society Information Retrieval Specialist Group’s Search Solutions, on how we might improve the way search engine relevance is tested. I’ll suggest a more formal process of test-based relevance tuning and show some useful tools. Our client NLA media access are also talking about the new Clipshare platform we built on Apache Lucene/Solr.

Do let me know if you’re attending and would like to chat – I’ll also be publishing slides and more information about the projects above soon.

Elasticsearch Percolator & Luwak: a performance comparison of streamed search implementations
(Mon, 27 Jul 2015) http://www.flax.co.uk/blog/2015/07/27/a-performance-comparison-of-streamed-search-implementations/

Most search applications work by indexing a relatively stable collection of documents and then allowing users to perform ad-hoc searches to retrieve relevant documents. However, in some cases it is useful to turn this model on its head, and match individual documents against a collection of saved queries. I shall refer to this model as “streamed search”.

One example of streamed search is media monitoring. A monitoring agency’s clients’ interests are represented by stored queries. Incoming documents are matched against the stored queries, and hits are returned for further processing before being reported to the client. Another example is financial news monitoring, used to predict share price movements.

In both these examples, queries may be extremely complex (in order to improve the accuracy of hits). There may be hundreds of thousands of stored queries, and documents to be matched may arrive at a rate of hundreds or thousands per second. Not surprisingly, streamed search can be a demanding task, and the computing resources required to support it can be a significant expense. There is therefore a need for the software to be as performant and efficient as possible.
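
The streamed-search model can be sketched in a few lines: instead of indexing documents and running ad-hoc queries, we store the queries and match each arriving document against all of them. The following is a deliberately naive Python illustration with invented queries and data (real systems like Luwak and Percolator store full Lucene queries, not term sets):

```python
# Stored queries: each client interest is a set of required terms plus
# optional excluded terms -- a tiny stand-in for real Boolean queries.
STORED_QUERIES = {
    "acme-mentions": ({"acme", "widgets"}, {"jobs"}),
    "share-price": ({"acme", "profit"}, set()),
}

def match_document(text):
    """Return the ids of all stored queries that match this document."""
    terms = set(text.lower().split())
    hits = []
    for query_id, (required, excluded) in STORED_QUERIES.items():
        if required <= terms and not (excluded & terms):
            hits.append(query_id)
    return hits

# Every incoming document is checked against every stored query.
print(match_document("Acme widgets report record profit"))
# → ['acme-mentions', 'share-price']
```

Note that this naive loop runs every stored query against every document, which is exactly the cost problem the rest of this post is about.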

The two leading open source streamed search implementations are Elasticsearch Percolator and Luwak. Both depend on the Lucene search engine. As the developers of Luwak, we have an interest in how its performance compares with Percolator’s. We therefore carried out some preliminary testing.

Ideally, we would have used real media monitoring queries and documents. However, these are typically protected by copyright, and the queries represent a fundamental asset of monitoring companies. In order to make the tests distributable, we chose to use freely downloadable documents from Wikipedia, and to generate random queries. These queries were much simpler in structure than the often deeply nested queries from real applications, but we believe that they still provide a useful comparison.

The tests were carried out on an Amazon EC2 r3.large VM running Ubuntu. We wrote a Python script to download, parse and store random Wikipedia articles, and another to generate random queries from the text. The query generator was designed to be somewhat “realistic”, in that each query should match more than zero documents. For Elasticsearch, we wrote scripts to index queries into the Percolator and then run documents through it. Since Luwak has a Java API (rather than Elasticsearch’s RESTful API), we wrote a minimal Java app to do the same.

10,000 documents were downloaded from Wikipedia, and 100,000 queries generated for each test. We generated four types of query:

  • Boolean with 10 required terms and 2 excluded terms
  • Boolean with 100 required terms and 20 excluded terms
  • 20 required wildcard terms, with a prefix of 4 characters
  • 2-term phrase query with slop of 5
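
Our actual generator is part of the test scripts linked at the end of this post; as a rough illustration of the approach (names and structure here are invented), sampling the required terms from a single real document guarantees that each generated query matches at least that document:

```python
import random

def generate_boolean_query(doc_terms, n_required, n_excluded, vocabulary):
    """Build a random Boolean query whose required terms all come from one
    real document, so the query is guaranteed to match at least that doc."""
    required = random.sample(sorted(doc_terms), n_required)
    # Excluded terms are drawn from the rest of the corpus vocabulary,
    # so they cannot knock out the document the query was built from.
    candidates = sorted(vocabulary - doc_terms)
    excluded = random.sample(candidates, n_excluded)
    return {"must": required, "must_not": excluded}

random.seed(42)  # reproducible runs
doc = {"apple", "banana", "cherry", "damson", "elderberry"}
vocab = doc | {"fig", "grape", "kiwi", "lime", "mango"}
q = generate_boolean_query(doc, n_required=3, n_excluded=2, vocabulary=vocab)
print(q)
```

The same idea scales up to the 100-required-term queries above by sampling from longer Wikipedia articles.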

We ran the tests independently, giving Luwak and Elasticsearch a JVM heap size of 8GB, and doing an initial pre-run in order to warm the OS cache (this did not actually have a noticeable effect). For sanity, we checked that each document matched the same queries in both Luwak and Percolator.

The results are shown in the graphs below, where the y-axis represents average documents processed per second.

[Graphs: average documents processed per second, Luwak vs. Percolator, for each of the four query types]

Luwak was consistently faster than Percolator, ranging from a factor of 6 (for the phrase query type) to 40 (for the large Boolean queries).

The reason for this is almost certainly due to Luwak’s presearcher. When a query is added to Luwak, the library generates terms to index the query. For each incoming document, a secondary query is constructed and run against the query index, which returns a subset of the entire query set. Each of these is then run against the document in order to generate the final results. The effect of this is to reduce the number of primary queries which have to be executed against the document, often by a considerable factor (at a relatively small cost of executing the secondary query). Percolator does not have this feature, and by default matches every primary query against every document (it would be possible, but not straightforward, for an application to implement a presearch phase in Percolator). Supporting this analysis, when the Luwak presearcher was disabled its performance dropped to about the same level as Percolator.
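
The presearcher idea can be illustrated with a toy term index. This is a heavy simplification of Luwak's actual presearcher (which builds the secondary query inside Lucene and handles far more query shapes), but the shape of the optimisation is the same: candidate selection first, full matching only on the candidates.

```python
from collections import defaultdict

class ToyPresearcher:
    """Index each stored query by its terms; for an incoming document,
    only queries sharing at least one term with it become candidates."""

    def __init__(self):
        self.term_index = defaultdict(set)   # term -> query ids
        self.queries = {}                    # query id -> required terms

    def add_query(self, query_id, required_terms):
        self.queries[query_id] = set(required_terms)
        for term in required_terms:
            self.term_index[term].add(query_id)

    def match(self, text):
        doc_terms = set(text.lower().split())
        # Secondary "query": look up candidate queries via the term index.
        candidates = set()
        for term in doc_terms:
            candidates |= self.term_index.get(term, set())
        # Primary match: run only the candidates against the document.
        return sorted(q for q in candidates if self.queries[q] <= doc_terms)

p = ToyPresearcher()
p.add_query("q1", ["acme", "profit"])
p.add_query("q2", ["weather", "storm"])
print(p.match("acme profit warning issued"))  # → ['q1']; q2 is never even considered
```

With hundreds of thousands of stored queries, most of which share no terms with a given document, this candidate-filtering step is where the large speedup comes from.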

These results must be treated with a degree of caution, for several reasons. As already explained, the queries used were randomly generated, and far simpler in structure than typical hand-crafted monitoring queries. Furthermore, the tests were single-threaded and single-sharded, whereas a multithreaded, multi-shard, distributed architecture would be typical for a real-world system. Finally, Elasticsearch Percolator is a service providing a high-level, RESTful API, while Luwak is much lower level, and would require significantly more application-level code to be implemented for a real installation.

However, since both Luwak and Percolator use the same underlying search technology, it is reasonable to conclude that the Luwak presearcher can give it a considerable performance advantage over Percolator.

If you are already using Percolator, should you change? If performance is not a problem now and is unlikely to become a problem, then the effort required is unlikely to be worth it. Luwak is not a drop-in replacement for Percolator. However, if you are planning a data-intensive streaming search system, it would be worth comparing the two. Luwak works well with existing high-performance distributed computing frameworks, which would enable applications using it to scale to very large query sets and document streams.

Our test scripts are available here. We would welcome any attempts to replicate, extend or contest our results.

A review of Stephen Arnold’s CyberOSINT & Next Generation Information Access
(Tue, 17 Feb 2015) http://www.flax.co.uk/blog/2015/02/17/a-review-of-stephen-arnolds-cyberosint-next-generation-information-access/

Stephen Arnold, whose blog I enjoy due to its unabashed cynicism about overenthusiastic marketing of search technology, was kind enough to send me a copy of his recent report on CyberOSINT & Next Generation Information Access (NGIA), the latter being a term he has recently coined. OSINT itself refers to intelligence gathered from open, publicly available sources, not anything to do with software licenses – so yes, this is all about the NSA, CIA and others, who as you might expect are keen on anything that can filter out the interesting from the noise. Let’s leave the definition (and the moral questionability) of ‘publicly available’ aside for now – even if you disagree with its motives, this is a use case which can inform anyone with search requirements of the state of the art and what the future holds.

The report starts off with a foreword by Robert David Steele, who has had a varied and interesting career and lately has become a cheerleader for the other kind of open source – software – as a foundation for intelligence gathering. His view is that the tools used by the intelligence agencies ‘are also not good enough’ and ‘We have a very long way to go’. Although he writes that ‘the systems described in this volume have something to offer’ he later concludes that ‘This monograph is a starting point for those who might wish to demand a “full spectrum” solution, one that is 100% open source, and thus affordable, interoperable, and scalable.’ So for those of us in the open source sector, we could consider Arnold’s report as a good indicator of what to shoot for, a snapshot of the state of the art in search.

Arnold then starts the report with some explanation of the NGIA concept. This is largely a list of the common failings of traditional search platforms (basic keyword search, oft-confusing syntax, separate silos of information, lack of multimedia features and personalization) and how they might be addressed (natural language search, automatic querying, federated search, analytics). I am unconvinced this is as big a step as Arnold suggests though: it seems rather to imply that all past search systems were badly set up and configured and somehow a NGIA system will magically pull everything together for you and tell you the answer to questions you hadn’t even asked yet.

Disappointingly the exemplar chosen in the next chapter is Autonomy IDOL: regular readers will not be surprised by my feelings about this technology. Arnold suggests the creation of the Autonomy software was influenced by cracking World War II codes, rock music and artificial intelligence, which is in my mind adding egg to an already very eggy pudding, and not in step with what I know about the background of Cambridge Neurodynamics (Autonomy’s progenitor, created very soon after – and across the corridor from – Muscat, another Cambridge Bayesian search technology firm where Flax’s founders cut their teeth on search). In particular, Autonomy’s Kenjin tool – which automatically suggested related documents – is identified as a NGIA feature, although at the time I remember it being reminiscent of features we had built a year earlier at Muscat – we even applied for a patent. Arnold does note that ‘[Autonomy founder, Mike] Lynch and his colleagues clamped down on information about the inner workings of its smart software.’ and ‘The Autonomy approach locks down the IDOL components.’ – this was a magic black box of course, with a magically increasing price tag as well. The price tag rose to ridiculous dimensions (even after an equally ridiculous writedown) when Hewlett Packard bought the company.

The report continues with analysis of various other potential NGIA contenders, including Google-funded timeline analysis specialists Recorded Future and BAE Detica – interestingly one of the search specialists from this British company has now gone on to work at Elasticsearch.

The report concludes with a look at the future, correctly identifying advanced analytics as one key future trend. However this conclusion also echoes the foreword, with ‘The cost of proprietary licensing, maintenance, and training is now killing the marketplace. Open source alternatives will emerge, and among these may be a 900 pound gorilla that is free, interoperable and scalable.’ Although I have my issues with some of the examples chosen, I’m sure the report will be very useful to those in the intelligence sector, who like many are still looking for search that works.

Searching for opportunities in Real-Time Analytics
(Mon, 02 Feb 2015) http://www.flax.co.uk/blog/2015/02/02/searching-for-opportunities-in-real-time-analytics/

I spent a day last week at a new event from UNICOM, a conference on Real-Time Analytics. Mike Ferguson chaired the event and was kind enough to spend time with me over lunch exploring how search software might fit into the mix, something that has been on my mind since hearing about the Unified Log concept a few weeks ago.

Real-Time Analytics is a field where sometimes vast amounts of data in motion are gathered, filtered, cleaned and analysed to trigger various actions to benefit a business: building on earlier capabilities in Business Intelligence, the endgame is a business that adapts automatically to changing conditions in real-time – for example, automating the purchasing of extra stock based on changing behaviour of customers. The analysis part of this chain is driven by complex models, often based on sets of training data. Complex Event Processing or CEP is an older term for this kind of process (if you’re already suffering from buzzword overflow, Martin Kleppman has put some of these terms in context for those more familiar with web paradigms). Tools mentioned included Amazon Kinesis and, from the Apache stable, Cassandra, Hadoop, Kafka, Yarn, Storm and Spark. I particularly enjoyed Michael Cutler‘s presentation on Tumra’s Spark-based system.

One of the central problems identified was that, due to the rapid growth of data (including from the fabled Internet of Things), it will shortly be impossible to store every data point produced – so we must somehow sort the wheat from the chaff. Options for the analysis part include SQL-like query languages and more complex machine learning algorithms. I found myself wondering if search technology, using a set of stored queries, could be used somehow to reduce the flow of this continuous stream of data, using something like this prototype implementation based on Apache Samza. One could use this approach to transform unstructured data (say, a stream of text-based customer comments) into more structured data for later timeline analysis, split streams of events into several parts for separate processing or just to watch for sets of particularly interesting and complex events. Although search platforms such as Elasticsearch are already being integrated into the various Real-Time Analytics frameworks, these seem to be being used for offline processing rather than acting directly on the stream itself.
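
As a sketch of that idea (the categories, terms and data here are entirely invented), a handful of stored searches can turn a stream of free-text comments into structured, routable records, discarding everything that matches nothing:

```python
# Hypothetical stored searches an analyst might save after exploring
# past events: each tags matching comments with a structured category.
SAVED_SEARCHES = {
    "delivery-complaint": {"late", "delivery"},
    "price-feedback": {"expensive", "price"},
}

def structure_event(comment):
    """Turn one unstructured comment into a structured record, keeping
    only events that match at least one saved search (any-term match)."""
    terms = set(comment.lower().split())
    categories = sorted(c for c, required in SAVED_SEARCHES.items()
                        if required & terms)
    if not categories:
        return None  # chaff: not worth storing
    return {"text": comment, "categories": categories}

stream = ["My delivery was late again", "Lovely shop", "Too expensive for me"]
records = [r for r in (structure_event(c) for c in stream) if r]
print([r["categories"] for r in records])
# → [['delivery-complaint'], ['price-feedback']]
```

The structured records could then be routed to separate downstream processors by category, which is the stream-splitting use case mentioned above.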

One potential advantage is that it might be a lot easier for analysts to generate a stored search than to learn SQL or the complexities of machine learning – just spend some time with a collection of past events and refine your search terms, facets and filters until your results are useful, and save the query you have generated.

This was a very interesting introduction to a relatively new field and thanks to UNICOM for the invitation. We’re going to continue to explore the possibilities!

Out and about in January and February
(Tue, 27 Jan 2015) http://www.flax.co.uk/blog/2015/01/27/out-and-about-in-january-and-february/

We’re speaking at a couple of events soon: if you’re in London and interested in Apache Lucene/Solr we’re also planning another London User Group Meetup soon.

Firstly my colleague Alan Woodward is speaking with Martin Kleppman at FOSDEM in Brussels (31st January-1st February) on Searching over streams with Luwak and Apache Samza – about some fascinating work they’ve been doing to combine the powerful ‘reverse search’ facilities of our Luwak library with Apache Samza‘s distributed, stream-based processing. We’re hoping this means we can scale Luwak beyond its current limits (although those limits are pretty accommodating, as we know of systems where a million or so stored searches are applied to a million incoming messages every day). If you’re interested in open source search the Devroom they’re speaking in has lots of other great talks planned.

Next I’m talking about the wider applications of this kind of reverse search in the area of media monitoring, and how open source software in general can help you turn your organisation’s infrastructure upside down, at the Intrateam conference event in Copenhagen from February 24th-26th. Scroll down to find my talk at 11.35 am on Thursday 26th.

If you’d like to meet us at either of these events do get in touch.

A new Meetup for Lucene & Solr
(Mon, 01 Dec 2014) http://www.flax.co.uk/blog/2014/12/01/a-new-meetup-for-lucene-solr/

Last Friday we held the first Meetup for a new Apache Lucene/Solr User Group we’ve recently created (there’s a very popular one for Elasticsearch so it seemed only fair Solr had its own). My co-organiser Ramkumar Aiyengar of Bloomberg provided the venue – Bloomberg’s huge and very well-appointed presentation space in their headquarters building off Finsbury Square, which impressed attendees. As this was the first event we weren’t expecting huge numbers, but among the 25 or so attending we were glad to see some from Flax clients including News UK, Alfresco and Reed.co.uk.

Shalin Mangar, Lucene/Solr committer and SolrCloud expert started us off with a Deep Dive into some of the recent work performed on testing resilience against network failures. Inspired by this post about how Elasticsearch may be subject to data loss under certain conditions (and to be fair I know the Elasticsearch team are working on this), Shalin and his colleagues simulated a number of scary-sounding network fault conditions and tested how well SolrCloud coped – the conclusion being that it does rather well, with the Consistency part of the CAP theorem covered. You can download the Jepsen-based code used for these tests from the repository of Shalin’s employer, Lucidworks. It’s great to see effort being put into these kinds of tests, as reliable scalability is a key requirement these days.

I was up next to talk briefly about a recent study we’ve been doing into a performance comparison between Solr and Elasticsearch. We’ll be blogging about this in more detail soon, but as you can see from my colleague Tom Mortimer’s slides there aren’t many differences, although Solr does seem to be able to support around three times the number of queries per second. We’re very grateful to BigStep (who offer some blazingly fast hosting for Elasticsearch and other platforms) for assisting with the study over the last few weeks – and we’re going to continue with the work, and publish our code very soon so others can contribute and/or verify our findings.

Next I repeated my talk from Enterprise Search and Discovery on our work with media monitoring companies on scalable ‘inverted’ search – this is when one has a large number of stored queries to apply to a stream of incoming documents. Included in the presentation was a case study based on our work for Infomedia, a large Scandinavian media analysis company, where we have replaced Autonomy IDOL and Verity with a more scalable open source solution. As you might expect the new system is based on Apache Lucene/Solr and our Luwak library.

Thanks to Shalin for speaking and all who came – we hope to run another event soon, do let us know if you have a talk you would like to give, can offer sponsorship and/or a venue.

London Search Meetup – Serious Solr at Bloomberg & Elasticsearch 1.0
(Mon, 24 Mar 2014) http://www.flax.co.uk/blog/2014/03/24/london-search-meetup-serious-solr-at-bloomberg-elasticsearch-1-0/

The financial information service Bloomberg hosted last Friday’s London Search Meetup in their offices on Finsbury Square – the venue had to be seen to be believed, furnished as it is with neon, chrome, modern art and fishtanks. A slight step up from the usual room above a pub! The first presenter was Ramkumar Aiyengar of Bloomberg on their new search system, accessed via the Bloomberg terminal (as it seems is everything else – Ramkumar even opened his presentation file and turned off notifications from his desk phone from within this application).

Make no mistake, Bloomberg’s requirements are significant: 900,000 new stories from 75,000 sources and 8 million manual searches every day with another 350,000 stored searches running automatically. Some of these stored searches are Boolean expressions with up to 20,000 characters and the source data is also enhanced with keywords from a list of over a million tags. Access Control Lists (ACLs) for security and over 40 languages are also supported, with new stories becoming searchable within 100ms. What is impressive is that these requirements are addressed using the open source Apache Lucene/Solr engine running 256 index shards, replicated 4 times for a total of 1024 cores, on a farm of 32 servers each with 256GB of RAM. It’s interesting to wonder if many closed source search engines could cope at all at this scale, and slightly scary to think how much it might cost!

Ramkumar explained how achieving this level of performance had led them to expose (and help to fix) quite a few previously unknown race conditions in Solr. His team had also found innovative ways to cope with such a large number of tags – each has a confidence value, say 70%, and this can be used to perform a kind of TF/IDF ranking by effectively adding 70 copies of the tag to a document. They have also developed an XML-based query parser for their in-house query syntax (although in the future the JSON format may be used) and have contributed code back to Solr (for those interested, Bloomberg have contributed to SOLR-839 and are also looking at SOLR-4351).
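
The tag-confidence trick can be sketched numerically (the figures and names below are invented for illustration; Bloomberg's real implementation of course works inside the Lucene index): repeating a tag in proportion to its confidence means an ordinary term-frequency weight automatically reflects the tagger's certainty.

```python
def tag_copies(confidence, scale=100):
    """Repeat a tag in proportion to its confidence, so that plain TF
    ranking weights a 70%-confidence tag more than a 30% one."""
    return ["tagged_topic"] * round(confidence * scale)

def term_frequency(term, doc_terms):
    """Raw term frequency: how many times the term appears in the doc."""
    return doc_terms.count(term)

doc_high = ["some", "body", "text"] + tag_copies(0.70)
doc_low = ["some", "body", "text"] + tag_copies(0.30)

print(term_frequency("tagged_topic", doc_high),
      term_frequency("tagged_topic", doc_low))  # → 70 30
```

Any ranking formula built on term frequency then scores the high-confidence document higher for a query on the tag, with no changes to the scoring code itself.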

For the monitoring requirement, we were very pleased to hear they are building an application based on our own Luwak stored query engine, which we developed for just this sort of high-performance application – we’ll be helping out where we can. Other future plans include relevance improvements, machine translation, entity search and connecting to some of the other huge search indexes running at Bloomberg, some on the petabyte scale.

Next up was Mark Harwood of Elasticsearch with an introduction to some of the features in version 1.0 and above. I’d been lucky enough to see Mark talk about some of these features a few weeks before so I won’t repeat myself here, but suffice it to say he again demonstrated the impressive new Aggregrations feature and raised the interesting possibility of market analysis by aggregating over a set of logged queries – identifying demand from what people are searching for.

Thanks to Bloomberg, Ramkumar, Mark and Tyler Tate for a fascinating evening – we also had a chance to remind attendees of the combined London & Cambridge Search Meetup on April 29th to coincide with the Enterprise Search Europe conference (note the discount code!).

The post London Search Meetup – Serious Solr at Bloomberg & Elasticsearch 1.0 appeared first on Flax.

ElasticSearch London Meetup – a busy and interesting evening! http://www.flax.co.uk/blog/2014/02/26/elasticsearch-london-meetup-a-busy-and-interesting-evening/ Wed, 26 Feb 2014 13:44:43 +0000

I was lucky enough to attend the London ElasticSearch User Group’s Meetup last night – around 130 people came to the Goldman Sachs offices in Fleet Street, with many more on the waiting list – a sign of quite how much interest there is in ElasticSearch these days. The event didn’t disappoint, with some fascinating talks.

Hugo Pickford-Wardle from Rely Consultancy kicked off with a discussion of how ElasticSearch allows for rapid ‘hard prototyping’ – a way to very quickly test the feasibility of a business idea, and/or to demonstrate previously impossible functionality using open source software. His talk focussed on how a search engine can surface content from previously unconnected and inaccessible ‘data islands’, promote re-use and repurposing of that data, and lead clients to understand the value of committing to funding further development. Examples included a new search over planning applications for Westminster City Council. Interestingly, Hugo mentioned that during one project ElasticSearch was found to be 10 times faster than the closed source (and very expensive) Autonomy IDOL search engine.

Next was Indy Tharmakumar from our hosts Goldman Sachs, showing how his team have built powerful support systems using ElasticSearch to index log data. Using 32 single-core CPU instances, the system they have built can store 1.2 billion log lines with a throughput of up to 40,000 messages a second (the systems monitored produce 5TB of log data every day). Log data is queued up in Redis, distributed to many Logstash processes and indexed by Elasticsearch, with a Kibana front end. They learned that Logstash can be particularly CPU intensive but Elasticsearch itself scales extremely well. Future plans include considering Apache Kafka as a data backbone.
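A minimal Logstash pipeline of the shape Indy described might look something like the following – a sketch only, with hypothetical host and key names, using the configuration syntax of the Logstash 1.x era (later versions renamed some options):

```
input {
  redis {
    host      => "127.0.0.1"   # Redis queue host (hypothetical)
    data_type => "list"
    key       => "logstash"    # list the log shippers push to
  }
}
output {
  elasticsearch {
    host => "127.0.0.1"        # Elasticsearch node (hypothetical)
  }
}
```

Many Logstash processes can consume from the same Redis list, which is how the pipeline scales out the CPU-intensive parsing stage.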

The third presentation was by Clinton Gormley of ElasticSearch, talking about the new cross field matching features that allow term frequencies to be summed across several fields, preventing certain cases where traditional matching techniques based on Lucene’s TF/IDF ranking model can produce some unexpected behaviour. Most interesting for me was seeing Marvel, a new product from ElasticSearch (the company), containing the Sense developer console allowing for on-the-fly experimentation. I believe this started as a Chrome plugin.
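The behaviour Clinton described surfaced as the `cross_fields` type of the `multi_match` query. A sketch of such a query might look like this (the field names are ours, for illustration – searching for a person whose first and last names live in separate fields):

```json
{
  "query": {
    "multi_match": {
      "query":    "peter smith",
      "type":     "cross_fields",
      "fields":   [ "first_name", "last_name" ],
      "operator": "and"
    }
  }
}
```

With per-field matching, a document whose first name field happened to contain both words could outrank the actual Peter Smith; blending term statistics across the fields avoids that.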

The last talk, by Mark Harwood, again from ElasticSearch, was the most interesting for me. Mark demonstrated a new feature planned for the 1.1 release (or possibly later): an aggregation for ‘significant terms’. This allows one to spot anomalies in a data set – ‘uncommon common’ occurrences, as Mark described it. His prototype showed a way to visualise UK crime data using Google Earth, identifying areas of the country where certain crimes are most reported – examples including bike theft here in Cambridge (which we’re sadly aware of!). Mark’s Twitter account has some further information and pictures. This kind of technique allows very powerful analytics capabilities to be built using Elasticsearch – spotting anomalies such as compromised credit cards, and using visualisation to identify the guilty party, for example a hacked online merchant. As Mark said, it’s important to remember that the underlying Lucene search library counts everything – and we can use those counts in some very interesting ways.
UPDATE: Mark has posted some code from his demo here.
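A significant-terms request of the kind Mark demonstrated might look roughly like the following (the field and query values are hypothetical, modelled on UK crime data) – the aggregation returns crime types that are unusually frequent in the matched subset compared with the whole index:

```json
{
  "query": {
    "match": { "force": "Cambridgeshire Constabulary" }
  },
  "aggregations": {
    "unusually_common_crimes": {
      "significant_terms": { "field": "crime_type" }
    }
  }
}
```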

The evening closed with networking, pizza and beer with a great view over the City – thanks to Yann Cluchey for organising the event. We have our own Cambridge Search Meetup next week and we’re also featuring ElasticSearch, as does the London Search Meetup a few weeks later – hope to see you there!

The post ElasticSearch London Meetup – a busy and interesting evening! appeared first on Flax.

Introducing Luwak, a library for high-performance stored queries http://www.flax.co.uk/blog/2013/12/06/introducing-luwak-a-library-for-high-performance-stored-queries/ Fri, 06 Dec 2013 14:24:35 +0000

A few weeks ago we spoke in Dublin at Lucene Revolution 2013 on our work in the media monitoring sector for various clients including Gorkana and Australian Associated Press. These organisations handle a huge number (sometimes hundreds of thousands) of news articles every day and need to apply tens of thousands of stored expressions to each one, which would be extremely inefficient if done with standard search engine libraries. We’ve developed a much more efficient way to achieve the same result, by pre-filtering the expressions before they’re even applied: effectively we index the expressions and use the news article itself as a query, which led to the presentation title ‘Turning Search Upside Down’.
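The core idea can be sketched in a few lines – a toy illustration (not the Luwak implementation, and with made-up query names): index each stored query by its terms, then use an incoming document's terms to select the small subset of queries worth running in full.

```python
# Toy sketch of 'turning search upside down': the stored queries are
# indexed by term, and the incoming document acts as the query.
from collections import defaultdict

queries = {
    "q1": {"election", "vote"},
    "q2": {"merger", "acquisition"},
    "q3": {"vote", "referendum"},
}

# 'Index' the stored queries: map each term to the queries containing it
term_index = defaultdict(set)
for qid, terms in queries.items():
    for term in terms:
        term_index[term].add(qid)

def candidate_queries(document_terms):
    """Use the document's terms to pre-filter the stored queries."""
    candidates = set()
    for term in document_terms:
        candidates |= term_index[term]
    return candidates

doc = {"the", "vote", "was", "close"}
print(sorted(candidate_queries(doc)))  # ['q1', 'q3']
```

In the real library only these surviving candidates are then executed as full Lucene queries against the document, which is where the efficiency gain over running every stored query comes from.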

We’re pleased to announce the core of this process, a Java library we’ve called Luwak, is now available as open source software for your own projects. Here’s how you might use it:

Monitor monitor = new Monitor(new TermFilteredPresearcher()); /* Create a new Monitor */

/* Create and register a stored query with a single term */
MonitorQuery mq = new MonitorQuery("query1", new TermQuery(new Term(textfield, "test")));
monitor.update(mq);

/* Build an input document (which could be a news article) */
InputDocument doc = InputDocument.builder("doc1")
        .addField(textfield, document, WHITESPACE)
        .build();

/* Retrieve which queries it matches */
DocumentMatches matches = monitor.match(doc);

The library is based on our own fork of the Apache Lucene library (as Lucene doesn’t yet have a couple of features we need, although we expect these to end up in a release version of Lucene very soon). Our own tests have produced speeds of up to 70,000 stored queries applied to an article in around a second on modest hardware. Do let us know any feedback you have on Luwak – we think it may be useful for various monitoring and classification tasks where high throughput is necessary.

The post Introducing Luwak, a library for high-performance stored queries appeared first on Flax.
