A series of blogs by Karen Renshaw on improving site search:
- How to get started on improving Site Search Relevancy
- A suggested approach to running a Site Search Tuning Workshop
- Auditing your site search performance
- Developing ongoing search tuning processes
- Measuring search relevance scores
In my last blog I talked about creating a framework for measuring search relevancy scores. In this blog I’ll show how this measurement can be done with a new tool, Quepid.
As I discussed, you need to record a score for each search result based on how well it answers the original query. With this framework in place you can avoid the ‘see-saw’ effect, where fixing one query silently breaks many others further down the chain.
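To make the idea concrete, here is a minimal sketch of such a framework in Python. It is purely illustrative (the 0–3 grading scale, the example queries and all function names are my own assumptions, not Quepid's API): each query gets a score derived from human relevance judgments, and two scored runs can be compared to catch see-saw regressions.

```python
# Minimal sketch: score each query from human relevance judgments,
# then compare two configurations to spot "see-saw" regressions.
# The 0-3 grading scale and all names are illustrative assumptions.

def score_query(grades, max_grade=3):
    """Average graded relevance of the judged results, normalised to 0-1."""
    if not grades:
        return 0.0
    return sum(grades) / (max_grade * len(grades))

def find_regressions(before, after):
    """Return queries whose score dropped after a configuration change."""
    return {q: (before[q], after.get(q, 0.0))
            for q in before
            if after.get(q, 0.0) < before[q]}

# Scores per query before and after a (hypothetical) config change
before = {"resistor": score_query([3, 3, 2, 1]),
          "led":      score_query([2, 2, 2, 2])}
after  = {"resistor": score_query([3, 3, 3, 2]),
          "led":      score_query([1, 0, 2, 1])}

print(find_regressions(before, after))  # only "led" got worse
```

Run over thousands of queries, the same comparison immediately shows whether a tuning change helped overall or just shuffled the problem elsewhere.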
The challenge is the time it takes to re-score queries after each configuration change, especially when you could be testing thousands of queries.
That’s why it’s great to see a tool like Quepid now available. Quepid sits on top of the open source search engines Apache Solr and Elasticsearch (it can also incorporate scores from other engines, which is useful for comparison if you are migrating). It automatically recalculates scores when configuration changes are made, reducing the time it takes to understand the impact of your changes.
Business and technical teams benefit
Quepid is easy to get going with. Once you have set up and scored an initial set of search queries (known in Quepid as a case), developers can tweak configurations within the Quepid Sandbox without pushing to live, and relevancy scores are automatically recalculated so that business users see the change in scores immediately.
This score, combined with feedback from search testers, shows how effective the change has been, removing uncertainty about whether to publish the changes to your live site.
Improved stakeholder communication
Having figures that show how search relevancy is improving is also a powerful tool for communicating search performance to stakeholders (and helps to overcome those HIPPO and LIPPO challenges I’ve mentioned before). Whilst a relevancy score doesn’t translate directly into a conversion figure, understanding how your queries are performing can support business cases and customer metrics.
Test and Learn
Because the need to manually re-score queries is removed, automated search testing becomes possible; combined with greater collaboration and understanding across the entire search team, this improves the whole test-and-learn process.
Every organisation has a different objective when it comes to improving search, but Quepid is designed so that it can support your organisation and requirements:
- Choose from a range of available scorers or create your own
- Set up multiple cases so that you can quickly understand how different types of queries perform
- Share cases amongst users for review and auditing
- Download and export cases and scores
- Take a ‘deep dive’ into low-scoring queries
- Identify if there are particular trends or patterns you need to focus on as part of your testing
- Create a dashboard to share with category managers and other stakeholders
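On the choice of scorer: a common relevance metric a scorer might compute is nDCG, which rewards putting highly graded results near the top of the list. The sketch below shows the metric itself in Python; it is a hedged illustration of the idea, not Quepid’s scorer API.

```python
import math

def dcg(grades):
    """Discounted cumulative gain: higher grades near the top count more."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(grades))

def ndcg_at_k(grades, k=10):
    """Normalised DCG@k: the actual ranking vs. its ideal reordering (0-1)."""
    ideal = dcg(sorted(grades, reverse=True)[:k])
    return dcg(grades[:k]) / ideal if ideal > 0 else 0.0

# Grades (0-3) for the top six results of a hypothetical query, in rank order
print(round(ndcg_at_k([3, 2, 3, 0, 1, 2]), 3))  # → 0.961
```

A perfectly ordered result list scores 1.0, so per-query nDCG values are directly comparable across queries and across tuning runs.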
Karen Renshaw is an independent On Site Search consultant and an associate of Flax. Karen was previously Head of On Site Search at RS Components, the world’s largest electronic component distributor.
Flax can offer a range of consulting, training and support, provide tools for test-driven relevancy tuning and we also run Search Workshops. If you need advice or help please get in touch.