Expert Search Early Adopters pilot – pre-switch search data
In mid-September 2020, Health Education England (HEE) began a pilot to understand how best to help Knowledge and Library Services (KLS) in England move from using Healthcare Databases Advanced Search (HDAS) to provider interfaces (EBSCOhost, ProQuest and Ovid).
Phase 1 of data collection ended in November 2020 and we're able to share some of the results from our survey of pre-switch searches (carried out primarily on HDAS). This was undertaken to capture 'normal' search behaviour, so we could compare and contrast with searches carried out on provider interfaces after the pilot groups switched. The data collected is a great snapshot of search activity and makes fascinating reading if you're interested in search behaviour.
We recorded 68 searches during this phase. We asked participants to briefly describe each search – its purpose and level of complexity. As expected, topics were wide-ranging and search requesters came from a multitude of staff groups.
The most frequently used resources were HDAS Medline (76% of searches), HDAS Cumulative Index to Nursing and Allied Health Literature (CINAHL) (63% of searches) and HDAS Excerpta Medica database (EMBASE) (50% of searches). Results were collated using reference management software for 13% of searches, with EndNote Desktop the most frequently used reference management tool.
Exactly 50% of searches were completed in a single session and the other 50% over multiple sessions. The time taken to complete a search varied widely, with the shortest taking just 20 minutes and the longest 15 hours – a search to support a systematic review. Two of the searches captured fitted into the systematic review category, and as their times vastly skewed the average search time they were removed from calculations. Across the remaining 66 searches the average time to complete was 2 hours 51 minutes, with most taking 1-2 hours (44%) or 2-3 hours (29%).
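For anyone curious about how an outlier-excluded average like this can be calculated, here is a minimal sketch in Python. The timings in it are hypothetical placeholders, not the pilot's raw data, and the way systematic review searches are flagged is an assumption about how the survey responses were categorised.

```python
# Minimal sketch: average search time after excluding flagged outliers.
# All timings below are hypothetical placeholders, not the pilot's raw data.

searches = [
    {"minutes": 20, "systematic_review": False},   # shortest recorded search
    {"minutes": 95, "systematic_review": False},
    {"minutes": 150, "systematic_review": False},
    {"minutes": 900, "systematic_review": True},   # 15-hour systematic review search
]

# Systematic review searches vastly skew the mean, so leave them out.
times = [s["minutes"] for s in searches if not s["systematic_review"]]

average = sum(times) / len(times)
print(f"Average search time: {average // 60:.0f}h {average % 60:.0f}m")
```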
We asked participants to tell us what had gone well with their search, what didn't go so well, and what changes could be made to improve their search experience. Things that worked well included being able to search multiple resources without switching interfaces, being able to collate results and search history into one document for the search requester, and searches where the topic was straightforward and therefore easy to find results for. There were common issues around glitches in HDAS, de-duplicating results, and search topics that either proved difficult to search for or fell outside the scope of the databases available to the searcher. Possible improvements included increased stability (fewer interface glitches), less scrolling and a cleaner interface, and access to reference management software to de-duplicate and collate results.
Finally, we asked people to give their search experience a star rating, where 1 is poor and 5 is excellent. The average rating was 3.7, with 34% giving 3 stars, 42% giving 4 stars and 21% giving 5 stars.
Phase 2 of data collection is now well under way, and we are asking participants to fill out a similar survey for any searches they carry out on one specific day of their working week. We’ll be sharing the results from this phase over the next few months.
For any questions about the project, please contact Emily Hurt.