test driven relevance – Flax (http://www.flax.co.uk) – The Open Source Search Specialists

Haystack, the search relevance conference – day 2
Mon, 23 Apr 2018 – http://www.flax.co.uk/blog/2018/04/23/haystack-the-search-relevance-conference-day-2/

Two weeks ago I attended the Haystack relevance conference – I’ve already written about my overall impressions and on the first day’s talks but the following are some more notes on the conference sessions. Note that some of the presentations I attended have already been covered in detail by Sujit Pal’s excellent blog. Some of the presentations I haven’t linked to directly have now appeared on the conference website.

The second day of the event started for me with the enjoyable job of hosting a ‘fishbowl’ style panel session titled “No, You Don’t Want to Do It Like That! Stories from the search trenches”. The idea was that a rotating panel of speakers would tell us tales of their worst and hopefully most instructive search tuning experiences and we heard some great stories – this was by its nature an informal session and I don’t think anyone kept any notes (probably a good idea in the case of commercial sensitivity!).

The next talk was my favourite of the conference, given by René Kriegler on relevance scoring using product data and image recognition. René is an expert on e-commerce search (he also runs the MICES event in Berlin, which I’m looking forward to) and described how this domain is unlike many others: the interests of the consumer (e.g. price or availability) become part of the relevance criteria. One of the interesting questions for e-commerce applications is how ranking can affect profit. Standard TF/IDF models don’t always work well for e-commerce data with its short fields, leading to scores that can be almost binary: as he said, ‘a laptop can’t be more laptop-ish than another’. Image recognition is a potentially useful technique and he demonstrated a way to take the output of Google’s Inception machine learning model and use it to enrich documents within a search index. However, this model can output a vector with over a thousand dimensions, and he described how a technique called random projection trees can be used to partition the vector space and thus produce simpler data for adding to the index (I think this is basically like slicing up a fruitcake and recording whether a currant ended up on one side of the knife or the other, but that may not be quite how it works!). René has built a Solr plugin to implement this technique.
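The ‘fruitcake’ intuition above can be sketched in a few lines of Python. This is a simplified, flat version of the idea (random projection trees apply such splits recursively, which this sketch does not); the dimensions and vectors are invented for illustration. Each random hyperplane is a ‘knife’, and we record only which side of it a vector falls.

```python
import random

def make_hyperplanes(dim, n_planes, seed=42):
    """Random hyperplanes; the sign of the dot product with each one
    records which side of the 'knife' a vector falls on."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

def projection_hash(vector, planes):
    """Reduce a high-dimensional vector to a short bit string: one bit
    per hyperplane, 1 if the vector lies on its positive side."""
    bits = []
    for plane in planes:
        dot = sum(v * p for v, p in zip(vector, plane))
        bits.append('1' if dot >= 0 else '0')
    return ''.join(bits)

planes = make_hyperplanes(dim=1000, n_planes=8)
# Nearby image vectors will usually share the same short bit string,
# so the hash can be indexed in place of the raw 1000-dimensional vector.
```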

Next I went to Matt Overstreet’s talk on Vespa, a recently open-sourced search and Big Data platform from Oath (a part of Yahoo! Inc.). Matt described how Vespa could be used to build highly scalable personalised recommendation, search or real-time data display applications, and took us through how Vespa is configured through a series of APIs and XML files. Interestingly (and perhaps unsurprisingly) Vespa has very little support for languages other than English at present. Queries are carried out through its own SQL-like language, YQL, and grouping and data aggregation functions are available. He also described how Vespa can use multidimensional arrays of values – tensors – for example from a neural network. Matt recommended we all try out Vespa – but on a cloud service, not a low-powered laptop!

Ryan Pedala was up next to talk about named entity recognition (NER) and how it can be used to annotate or label data. He showed his experiments with tools including Prodigy and a custom GUI he had built, compared various NER libraries such as Stanford NLP and OpenNLP, and referenced an interesting paper on NER for travel-related queries. I didn’t learn a whole lot of new information from this talk, but it may have been useful to those who haven’t considered using NER before.
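For readers who haven’t considered NER before, the labelling idea can be shown with a toy example. This sketch uses a hand-made gazetteer and an invented label set rather than a real library such as Stanford NLP or OpenNLP, which learn such labels statistically:

```python
# Toy gazetteer-based tagger for travel-style queries; labels follow
# the common IOB convention (B- begins an entity, O is outside any).
CITIES = {'london', 'paris', 'berlin'}

def tag_query(query):
    """Return (token, label) pairs for a query string."""
    tags = []
    for token in query.lower().split():
        label = 'B-CITY' if token in CITIES else 'O'
        tags.append((token, label))
    return tags
```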

Scott Stultz talked next on how to integrate business rules into a search application. He started with examples of key performance indicators (KPIs) that can be used for search – e.g. conversion ratios or average purchase values – and how these should be tied to search metrics. They can then be measured both before and after changes are made to the search application: automated unit tests and more complex integration tests should also be used to check that search performance is actually improving. Interestingly for me, he included under the umbrella of integration tests techniques such as replaying recent queries extracted from the logs against the search engine. He made some good practical points, such as ‘think twice before adding complexity’, and noted that good autocomplete will often ‘cannibalize’ existing search, as users simply choose a suggested completion rather than typing the entire query. There were some great tips here for practical business-focused search improvements.
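Replaying logged queries as a cheap integration test can be sketched very simply; `search_fn` here is a stand-in for whatever client calls your search engine, and the threshold is an invented example:

```python
def replay_logged_queries(search_fn, logged_queries, min_results=1):
    """Run recent queries from the logs against the search engine and
    return any that now come back with too few results - a simple way
    to catch regressions after a configuration change."""
    failures = []
    for query in logged_queries:
        hits = search_fn(query)
        if len(hits) < min_results:
            failures.append(query)
    return failures
```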

I then went to hear John Kane’s talk about interleaving for relevancy tuning, which covered a method for updating a machine learning model in real time using feedback on the current ranking produced by that model – simply by interleaving the results from two versions of the model. This isn’t a particularly new technique, and the talk was somewhat of a product pitch for 904Labs, but the technique does apparently work: some customers have seen a 30% increase in conversion rate.
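As a rough sketch of the interleaving idea – this is my reconstruction of team-draft interleaving, not 904Labs’ implementation – two rankings are merged so that each model contributes results alternately, and later clicks can be credited to whichever model supplied the clicked document:

```python
import random

def team_draft_interleave(ranking_a, ranking_b, seed=0):
    """Merge two rankings; the 'team' dict records which model supplied
    each document, so clicks on the merged list can be credited."""
    rng = random.Random(seed)
    merged, team = [], {}
    count_a = count_b = 0
    total = len(set(ranking_a) | set(ranking_b))
    while len(merged) < total:
        # The team that has placed fewer documents picks next;
        # ties are broken randomly.
        a_turn = count_a < count_b or (count_a == count_b and rng.random() < 0.5)
        source = ranking_a if a_turn else ranking_b
        candidate = next((d for d in source if d not in team), None)
        if candidate is None:
            # This team has nothing left; let the other team pick.
            a_turn = not a_turn
            source = ranking_a if a_turn else ranking_b
            candidate = next((d for d in source if d not in team), None)
        team[candidate] = 'a' if a_turn else 'b'
        merged.append(candidate)
        if a_turn:
            count_a += 1
        else:
            count_b += 1
    return merged, team
```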

The last talk of the day came from Tim Allison on an evaluation platform for Apache Tika, a well-known library for text extraction from a variety of file formats. Interspersed with tales of ‘amusing’ and sometimes catastrophic ways for text extraction to fail, Tim described how tika-eval can be used to test how good Tika is at extracting data and to output a set of metrics, e.g. how many different MIME file types were found. The tool is now used to run regular regression tests for Tika on a dataset of 3 million files from the CommonCrawl project. We’re regular users of Tika at Flax and it was great to hear how the project is moving forward.
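The kind of summary metrics tika-eval reports can be illustrated with a small sketch; the dictionary field names here are invented for the example, not tika-eval’s actual output format:

```python
from collections import Counter

def mime_profile(extraction_results):
    """Summarise a text-extraction run: count the MIME types detected
    and how many documents yielded no text at all."""
    types = Counter(r.get('mime', 'unknown') for r in extraction_results)
    failures = sum(1 for r in extraction_results if not r.get('text'))
    return {'mime_counts': dict(types), 'extraction_failures': failures}
```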

Doug Turnbull finished the conference with a brief summing up and thanks. There was a general feeling in the room that this conference was the start of something big, and people were already asking when the next event would be! One of my takeaways was that even though many of the talks used open source tools (perhaps unsurprisingly, as it is so much easier to talk about these publicly) the relevance tuning techniques and methods described can be applied to any search engine. The attendees came from a huge variety of companies, large and small, open and closed source based. This was an event about relevance engineering, not technology choices.

Thanks to all at OSC who made the event possible and for inviting us all to your home town – I think most if not all of us would happily visit again.

London Lucene/Solr Meetup: Query Pre-processing & SQL with Solr
Fri, 02 Jun 2017 – http://www.flax.co.uk/blog/2017/06/02/london-lucenesolr-meetup-query-pre-processing-sql-solr/

Bloomberg kindly hosted the London Lucene/Solr Meetup last night and we were lucky enough to have two excellent speakers for the thirty or so attendees. René Kriegler kicked off with a talk about the Querqy library he has developed to provide a pre-processing layer for Solr (and soon, Elasticsearch) queries. This library was originally developed during a project for Germany’s largest department store Galeria Kaufhof and allows users to add a series of simple rules in a text file to raise or lower results containing certain words, filter out certain results, add synonyms and decompound words (particularly important for German!). We’ve seen similar rules-based systems in use at many of our e-commerce clients, but few of these work well with Solr (Hybris in particular has a poor integration with Solr and can produce some very strange Solr queries). In contrast, Querqy is open source and designed by someone with expert Solr knowledge. With the addition of a simple UI or an integration with a relevancy-testing framework such as Quepid, this could be a fantastic tool for day-to-day tuning of search relevance – without the need for Solr expertise. You can find Querqy on Github.
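For illustration, a Querqy common-rules file looks roughly like this – a sketch from memory, so check the Querqy documentation for the exact syntax. Each rule is keyed on an input term, followed by indented instructions such as synonym expansion or down-boosting:

```
personal computer =>
  SYNONYM: pc
  DOWN(50): gift

notebook =>
  SYNONYM: laptop
```

The appeal is that a merchandiser can maintain rules like these in a plain text file without touching the Solr configuration itself.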

Michael Suzuki of Alfresco talked next about the importance of being bilingual (actually he speaks 4 languages!) and how new features in Solr version 6 allow one to use Solr syntax, SQL expressions or a combination of both. This helps hide Solr’s complexity and allows easy integration with database administration and reporting tools, opening up Solr to the huge number of developers and database administrators familiar with SQL syntax. Using a test set from the IMDB movie archive he demonstrated how SQL expressions can be used directly on a Solr index to answer questions such as ‘who are the highest-grossing film actors?’. He then used the visualisation tool Apache Zeppelin to produce various graphs based on these queries, and also showed dbVisualizer, a commonly used database administration tool, connecting directly to Solr via JDBC and showing the index contents as if they were just another set of SQL tables. He finished by talking briefly about the new statistical programming features in Solr 6.6 – a powerful new development with features similar to the R language.
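As an illustration of the kind of SQL expression Solr can execute directly over an index (the collection and field names here are invented, and Solr’s parallel SQL supports only a subset of full SQL):

```sql
SELECT actor, sum(gross)
FROM films
GROUP BY actor
ORDER BY sum(gross) DESC
LIMIT 10
```

Run through Solr’s JDBC driver, a query like this is what lets tools such as dbVisualizer treat the index as just another set of tables.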

We continued with a brief Q&A session. Thanks to both our speakers – we’ll be back again soon!

Boosts Considered Harmful – adventures with badly configured search
Fri, 19 Aug 2016 – http://www.flax.co.uk/blog/2016/08/19/boosts-considered-harmful-adventures-badly-configured-search/

During a recent client visit we encountered a common problem in search – over-application of ‘boosts’, which can be used to weight the influence of matches in one particular field. For example, you might sensibly use this to make results that match a query on their title field come higher in search results. However, in this case we saw huge boost values used (numbers in the hundreds) which were probably swamping everything else – and it wasn’t at all clear where these values had come from, be it experimentation or simply wild guesses. As you might expect, the search engine wasn’t performing well.
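To make this concrete, here is roughly what we mean, using Solr edismax-style field boosts (the field names and numbers are hypothetical):

```
# Boosts in the hundreds let title matches swamp every other ranking signal:
qf=title^400 description^0.5

# Modest, deliberate weights keep the underlying relevance model in play:
qf=title^2 description
```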

A problem with Solr, Elasticsearch and other search engines alike is that so many factors can affect the ordering of results – the underlying relevance algorithms, how source data is processed before it is indexed, how queries are parsed, boosts, sorting, fuzzy search, wildcards… It’s very easy to end up with a confusing picture and configuration files full of conflicting settings. Often these settings are left over from example files or previous configurations or experiments, without any real idea of why they were used. There are so many dials to adjust and switches to flick, many of which are unnecessary. The problem is compounded by embedding the search engine within another system (e.g. a content management platform or e-commerce engine), so it can be hard to see which control panel or file controls the configuration. Generally, this embedding has not been done by those with deep experience of search engines, so the defaults chosen are often wrong.

The balance of relevance versus recency is another setting which is often difficult to get right. At a news site we were asked to bias the order of results heavily in favour of recency (as the saying goes, yesterday’s newspaper is today’s chip wrapper) – the result being, as we had warned, that whatever the query today’s news would appear highest – even if it wasn’t relevant! Luckily by working with the client we managed to achieve a sensible balance before the site was launched.

Our approach is to strip back the configuration to a very basic one and to build on this, but only with good reason. Take out all the boosts and clever features and see how good the results are with the underlying algorithms (which have been developed based on decades of academic research – so don’t just break them with over-boosting). Create a process of test-based relevancy tuning where you can clearly relate a configuration setting to improving the result of a defined test. Be clear about which part of your system influences a setting and whose responsibility it is to change it, and record the changes in source control.
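A minimal sketch of what test-based relevancy tuning can look like in code: a set of judged queries scored before and after each configuration change, with the per-query scores kept in source control alongside the settings. The metric, queries and judgements below are invented for illustration:

```python
def precision_at_k(results, relevant, k=10):
    """Fraction of the top-k results judged relevant for one query."""
    top = results[:k]
    if not top:
        return 0.0
    return sum(1 for doc in top if doc in relevant) / len(top)

def relevance_regression(search_fn, judgements, k=10):
    """Score every judged query; re-run after each config change and
    fail the build if any score drops."""
    return {query: precision_at_k(search_fn(query), relevant, k)
            for query, relevant in judgements.items()}
```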

Boosts are a powerful tool – when used correctly – but you should start by turning them off, as they may well be doing more harm than good. Let us know if you’d like us to help tune your search!

Can we fix your Solr or Elasticsearch system in a single day?
Fri, 17 Jun 2016 – http://www.flax.co.uk/blog/2016/06/17/fix-your-solr-elasticsearch-system-in-a-day/

Here at Flax, we’re often called in to take a look at existing Apache Solr or Elasticsearch search applications, to suggest improvements, tune-ups or enhancements. It’s impossible for us to know ahead of time what we might find – out-of-date versions of the software, slow performance on either (or both) the indexing or search side of the application and untidy or incorrect configuration files are all common. We also have to learn something about your particular business or sector – the search needs of an e-commerce company are very different to those of a legal firm, startup or government organisation, for example.

Often we’re asked ‘how long will this take’ before we have any detail of the business, the application or how it has been set up. Our clients are obviously keen to know as soon as possible the potential costs of any work that might be necessary and what impact it might have. Some search specialists will only engage with a client for a minimum period (say, a week) which can be quite a commitment, especially for smaller enterprises, both in terms of budget and staff time. However, we’re quite happy to admit we don’t know how long it will take – yet.

Our approach is very simple. We’ll spend a first day with you, on-site if possible, examining the following things:

  • What are the business requirements for search?
  • How is the search engine software hosted & deployed?
  • What does the data to be searched look like? How is it indexed by the search engine?
  • What search features have been used and has this been done correctly?
  • How fast is search? What factors are affecting this?
  • How is search relevance and performance tested?

This is by no means an exhaustive list, but we’ll do what we can during this first day. At the end of the day we will write a brief report (probably no longer than two pages) detailing what we’ve found and some recommendations. If we find anything simple to fix that will make an immediate improvement, we’ll tell you (and if possible help you do so on-site). We charge a flat rate for this kind of engagement.

Even after this single day, you should now have enough information to make some decisions about improving your search – you could decide to let us help run a search workshop, or ask us to come up with a more detailed and costed improvement plan.

Hopefully you’ll also have realised that, with over 15 years’ experience of building search applications with open source software, we are the right team to help you improve your search. If you need help, get in touch today.

A suggested approach to running a Site Search Tuning Workshop
Thu, 24 Mar 2016 – http://www.flax.co.uk/blog/2016/03/24/suggested-approach-running-site-search-tuning-workshop/

A series of blogs by Karen Renshaw on improving site search:

  1. How to get started on improving Site Search Relevancy
  2. A suggested approach to running a Site Search Tuning Workshop
  3. Auditing your site search performance
  4. Developing ongoing search tuning processes
  5. Measuring search relevance scores


In my last blog I talked about getting started on improving site search relevancy, including the idea of running a two-day initial workshop. In this blog I cover more detail around what the workshop looks like in terms of structure.

Your reason for improving site search could be driven by a migration to a new platform or by a need to improve ‘business as usual’ performance, so the exact structure of the workshop should be tailored to you. It’s also worth remembering that whilst the workshop is the starting point, to get the most from it you will need to spend time in advance gathering all the relevant information you’ll need.

Workshop Overview

Objectives: Spend 30 minutes at the start of the day ensuring that the objectives (for the workshop and the overall project) are communicated and agreed across the entire project team.

Review the current search set up

It might seem wasteful to spend time reviewing your current set up – especially if you are moving to a new search platform – but ensuring everyone understands what you have today and why is essential when designing the future state.

It’s useful to break this session down further into Technical Set Up and Business Process. This helps to uncover whether there are:

  • Particular search cases for which you have developed workarounds to protect revenue – your intent will be to remove these workarounds, but you do need to be aware that they exist
  • Changes to your content model or content systems that you need to take into consideration
  • Technical constraints that applied in the past but are now gone

Ensuring a common level of understanding helps as the project moves forward.

Review current performance

Ensuring that the team knows how search queries are currently performing again increases buy-in and engagement, and provides a benchmark against which changes can be measured.

Your metrics will be dependent upon your business and what you currently measure (if you aren’t measuring anything – this would also be a good time to plan out what you should).

Classifying the types of search queries your customers are using is also important: do customers search predominantly for single keywords, lengthy descriptors or part numbers? Whilst getting to this level of detail involves manual processes, it not only provides real insight into how your customers formulate queries but also helps to avoid the ‘see-saw’ effect of focusing on fixes for some queries whilst unknowingly breaking others further down the tail.
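A first pass at this classification can even be automated before the manual review; the heuristics below are invented for illustration and would need tuning against your own query logs:

```python
import re

def classify_query(query):
    """Crude heuristics for the three query populations mentioned above:
    part numbers, lengthy descriptors and single/short keywords."""
    tokens = query.split()
    # A single token mixing letters/digits/dashes that contains a digit
    # looks like a part number (e.g. 'RS-232-7').
    if re.fullmatch(r'[A-Za-z0-9][A-Za-z0-9\-/]*\d[A-Za-z0-9\-/]*', query.strip()):
        return 'part number'
    if len(tokens) >= 4:
        return 'lengthy descriptor'
    return 'keyword'
```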

Develop a search testing methodology

With the information to hand around current search set up and performance, now comes the fun part – figuring out the configuration set ups and tests you want to include as part of that new set up.

If you are migrating to a new platform, new approaches are possible, but if you’re working with existing technology there are opportunities to review and test current assumptions.

Search tuning is an iterative process: impacts of configuration changes are only understood once you start testing and determine if the results are as you expected, so build this into the plan from the start.

Dependent upon timescales and objectives you might choose to make wholesale changes immediately, or you might decide to make a series of small changes so you can test and measure each of them independently. Whichever option is best for you, measuring and tracking changes to your search relevancy scores is critical; tools such as Quepid make this possible (it’s also a great tool for building those collaborative working practices which are so important).
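Whichever cadence you choose, the score you track needs a precise definition. One common choice is NDCG, the kind of per-query score a tool like Quepid can track over time; a minimal sketch, assuming human judgements graded 0–3 in the order the engine returned the documents:

```python
import math

def ndcg(gains, k=10):
    """Normalised discounted cumulative gain for one query: each gain is
    a relevance judgement (e.g. 0-3), discounted by rank position and
    normalised against the best possible ordering."""
    def dcg(gs):
        return sum(g / math.log2(i + 2) for i, g in enumerate(gs[:k]))
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal else 0.0
```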

Whilst the focus is around improving search relevancy, excellent search experiences are achieved as a result of the holistic user experience, so remember to consider your UX strategy alongside your search relevancy strategy.

Making plans

Alongside clearly defined objectives you should aim to end the workshop with clearly defined action plans. The level of detail you capture and maintain again depends on your needs but as a minimum you should have mapped out:

  • Initial Configuration Tests
  • Test Search Queries
  • Test Team
  • Ongoing project management (Stand Ups / Project Reviews)

In my next blog I’ll write in more detail about how to audit your current and future search performance.

Karen Renshaw is an independent On Site Search consultant and an associate of Flax. Karen was previously Head of On Site Search at RS Components, the world’s largest electronic component distributor.

Flax can offer a range of consulting, training and support, provide tools for test-driven relevancy tuning and we also run Search Workshops. If you need advice or help please get in touch.

 

How to get started on improving Site Search Relevancy
Fri, 18 Mar 2016 – http://www.flax.co.uk/blog/2016/03/18/get-started-improving-site-search-relevancy/

A series of blogs by Karen Renshaw on improving site search:

  1. How to get started on improving Site Search Relevancy
  2. A suggested approach to running a Site Search Tuning Workshop
  3. Auditing your site search performance
  4. Developing ongoing search tuning processes
  5. Measuring search relevance scores

 


You know your search experience isn’t working – your customers, your colleagues and your bosses are telling you so. You know you need to fix it – fix something – but where do you start?

Understanding and improving search relevancy can often feel like a never-ending journey, and it’s true that tuning search is not a one-off hit – it’s an iterative, ongoing process that needs investment. But the resources, companies and tools needed to support you are available.

Here, I’ll take a quick look at how to get started on your search tuning journey. I’ll be following up in subsequent blog posts with more details of each step.

Getting Started

Like any project, to be successful you need to understand what you want to achieve. The best way is to kick off the process with a multi-functional Search Workshop.

Typically run over two days, this workshop is designed to identify what to focus on and how. It becomes the key to developing ongoing search tuning processes and to driving collaborative working across teams.

Workshop Agenda

Whilst the agenda can be adapted to be specific to your organisation, in the main there are 4 key stages to it:

  1. Audit
  2. Define
  3. Testing Approach
  4. Summary

1. Audit – Where are we now?

Spend time understanding in depth what the issues are. There are many sources of information you can call on:

  • Web Analytics – How are queries performing today?
  • Customer Feedback – What are the key areas that your customers complain about?
  • Known Areas of Improvement – What’s already on your product backlog?
  • Competitive Review – Very important for eCommerce sites – how are your competitors responding to your customers’ queries?

2. Define – Where do we want to be?

As a team agree what the objectives for the project are:

  • What are the issues you want to address?
  • Are there specific types of search queries you want to focus on?
  • Is an overhaul of all search queries something you want to achieve?
  • What are the technical opportunities you haven’t yet exploited?

3. Testing Approach – What’s the plan of attack?

This is the time to plan out what changes you will make and what methodology for testing and deployment you are going to use.

  • What order should you make your configuration changes in?
  • Are there any constraints / limitations you need to plan around?
  • What resources do you need to support search configuration testing?
  • How are you going to measure and track your changes so you know they are successful?
  • Do you need to build in a communication plan for stakeholders?

4. Summary

Ensure that all actions are captured in a project plan with clear owners and timescales.

Workshop Attendees

Within an organisation multiple teams have responsibility for making search better, so at a minimum a subject matter expert from each team should attend.

Key attendees:

  • Business Owner
  • Search Developer
  • Content Owner
  • Web Analyst

Benefits of the workshop

There are practical and cultural benefits to approaching search in this way:

  • Collaborative working practices across the different disciplines are improved
  • Shared objectives and issues lead to better engagement and understanding of the approach
  • A test and learn approach can be developed with the time between testing iterations reduced
  • The workshop itself is an indicator to the wider business that search is now a key strategic priority and that it is getting the love and attention it needs

In my next blog I’ll cover how to run the workshop in more detail.

Karen Renshaw is an independent On Site Search consultant and an associate of Flax. Karen was previously Head of On Site Search at RS Components, the world’s largest electronic component distributor.

Flax can offer a range of consulting, training and support, provide tools for test-driven relevancy tuning and we also run Search Workshops. If you need advice or help please get in touch.

Out and about in search & monitoring – Autumn 2015
Wed, 16 Dec 2015 – http://www.flax.co.uk/blog/2015/12/16/search-monitoring-autumn-2015/

It’s been a very busy few months for events – so busy that it’s quite a relief to be back in the office! Back in late November I travelled to Vienna to speak at the FIBEP World Media Intelligence Congress with our client Infomedia about how we’ve helped them to migrate their media monitoring platform from the elderly, unsupported and hard-to-scale Verity software to an open source system based on our own Luwak library. We also replaced Autonomy IDOL with Apache Solr and helped Infomedia develop their own in-house query language, to prevent them becoming locked in to any particular search technology. The new system, which indexes over 75 million news stories and runs over 8,000 complex stored queries against every new story as it appears, is now in production, and Infomedia were kind enough to say that ‘Flax’s expert knowledge has been invaluable’ (see the slides here). We celebrated after our talk at a spectacular Bollywood-themed gala dinner organised by Ninestars Global.
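Media monitoring inverts the usual search problem: rather than one query being run over millions of stored documents, every incoming story is checked against thousands of stored queries. A deliberately naive sketch of the idea follows – Luwak itself does this efficiently by pre-filtering which queries could possibly match, and supports full boolean and proximity queries rather than the bag-of-words stand-in here:

```python
def matching_queries(story_text, stored_queries):
    """Return the ids of stored queries all of whose terms appear in
    the story - a toy stand-in for real query evaluation."""
    words = set(story_text.lower().split())
    return [qid for qid, terms in stored_queries.items()
            if set(terms) <= words]
```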

The week after I spoke at the Elasticsearch London Meetup with our client Westcoast on how we helped them build a better product search. Westcoast are the UK’s largest privately owned IT supplier and needed a fast and scalable search engine they could easily tune and adjust – we helped them build administration systems allowing boosts and editable synonym lists and helped them integrate Elasticsearch with their existing frontend systems. However, integrating with legacy systems is never a straightforward task and in particular we had to develop our own custom faceting engine for price and stock information. You can find out more in the slides here.

Search Solutions, my favourite search event of the year, was the next day, and I particularly enjoyed hearing about Google’s powerful voice-driven search capabilities, our partner UXLab’s research into complex search strategies and Digirati and Synaptica’s complementary presentations on image search and the International Image Interoperability Framework (a standard way to retrieve images by URL). Tessa Radwan of our client NLA media access spoke about some of the challenges in measuring similar news articles (for example, slightly rewritten for each edition of a daily newspaper) as part of the development of the new version of their Clipshare system, a project we’ve carried out over the last year or so. I also spoke on Test Driven Relevance, a theme I’ll be expanding on soon: how we could improve how search engines are tested and measured (slides here).

Thanks to the organisers of all these events for all their efforts and for inviting us to talk: it’s great to be able to share our experiences building search engines and to learn from others.
