Quepid & Flax – if you’re not testing your search, you’re doing it wrong!

Earlier this year an e-commerce company asked us to look into how they could improve the testing of their website search queries. A relatively simple task, you might think – but the company concerned has a turnover of over a billion pounds, with at least half of this via digital channels, so measuring how well search works is essential to preserve revenue. Like (I suspect) many others, they were recording the results of thousands of test searches – carried out manually by their staff in several different languages – in spreadsheets. This worked, but made it very slow to improve search results. It was also often the case that a configuration change made to address one problem would negatively affect another set of results. This is an issue we've seen many times before.

I’ve known the guys at OpenSource Connections (OSC) for several years now – working out of Charlottesville, Virginia, like Flax they provide expertise in search and related technologies. Last year they showed me an early version of Quepid, a browser-based tool for recording relevance judgements. This tool seemed like the perfect fit, and we began to work with OSC to add various enterprise features for the aforementioned client. Along the way, Quepid gained compatibility with both Elasticsearch and Solr, along with many user interface improvements, and it is now in daily use at the client’s site. As it can be used by both business users (to rate searches) and developers (to adjust search configuration and instantly see the effect on those ratings, across the board), it helps to create a fast feedback loop for improving relevance.

I’m very glad to announce a full partnership with OSC: we will be offering Quepid to all our clients (let me know if you want a demo!). We’ll also be talking about test-driven relevance tuning over the next few months – I’m particularly looking forward to the publication of this book co-written by Quepid developer Doug Turnbull.

If you’re not measuring how well your search is performing, you simply have no idea whether your search engine is correctly configured. Too often, changes to search are driven by the HiPPO (the Highest Paid Person’s Opinion), by reacting to customer feedback without considering the effects across the whole system, or simply by dropping in a new technology and assuming this will fix everything. We can change this by introducing test-driven relevance tuning.
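To make the idea concrete, here is a minimal sketch of what a test-driven relevance check might look like. All the data, queries, and SKU identifiers below are hypothetical, and the search results are stubbed in; in practice the judgements would come from raters using a tool like Quepid, and the results from live Solr or Elasticsearch queries.

```python
def precision_at_k(results, relevant, k=10):
    """Fraction of the top-k results that raters judged relevant."""
    top_k = results[:k]
    if not top_k:
        return 0.0
    return sum(1 for doc_id in top_k if doc_id in relevant) / len(top_k)

# Judgements recorded by business users: query -> set of relevant doc IDs.
# (Hypothetical example data.)
judgements = {
    "red shoes": {"sku-101", "sku-204", "sku-377"},
    "winter coat": {"sku-512", "sku-519"},
}

# Results returned by the current search configuration (stubbed here;
# in reality these would come from the search engine itself).
current_results = {
    "red shoes": ["sku-101", "sku-900", "sku-204", "sku-333"],
    "winter coat": ["sku-519", "sku-512", "sku-777"],
}

# The test: score every judged query, so that a configuration change
# which helps one query cannot silently hurt another.
for query, relevant in judgements.items():
    score = precision_at_k(current_results[query], relevant, k=4)
    print(f"{query}: precision@4 = {score:.2f}")
```

Running checks like this automatically after every configuration change is what turns spreadsheet-driven manual testing into a fast feedback loop.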

