How EBSCO Ensures the Accuracy of Its AI
EBSCO conducts quality assessments of AI responses to verify that quality standards are met and do not degrade over time. At every stage of the AI pipeline, quality can be measured and improved, but unintended errors can also be introduced that reduce it. This is why it is critical to assess quality at every stage, along with other measures such as bias, cost, environmental impact, and equality, on a regular basis, and to be transparent about how we assess them.
We use the following techniques to ensure the quality of AI feature outputs.
Retrieval-Augmented Generation (RAG)
Retrieval-Augmented Generation grounds the AI in a foundation of authoritative content, which significantly reduces the occurrence of 'hallucinations' (instances where AI generates inaccurate or unfounded information).
By grounding its responses in verified, comprehensive information, the AI is better equipped to produce reliable, contextually accurate, and trustworthy outputs. This approach ensures that users can have greater confidence in the information provided.
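As a minimal illustration of the RAG pattern described above, the sketch below retrieves the passages most relevant to a query from a tiny corpus and builds a prompt that constrains the model to answer only from those sources. The corpus, the word-overlap ranking, and the prompt wording are illustrative assumptions, not EBSCO's implementation.

```python
# A minimal sketch of Retrieval-Augmented Generation (RAG).
# The corpus, ranking heuristic, and prompt template are
# illustrative assumptions, not EBSCO's implementation.

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query (a stand-in
    for a real relevance-ranked search over authoritative content)."""
    q_words = set(query.lower().replace("?", "").split())
    return sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().replace(".", "").split())),
        reverse=True,
    )[:k]

def build_grounded_prompt(query, corpus):
    """Assemble a prompt that restricts the model to the retrieved passages."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return (
        "Answer using ONLY the sources below. "
        "If the sources do not contain the answer, say so.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "Photosynthesis converts light energy into chemical energy.",
    "The mitochondrion is the powerhouse of the cell.",
    "Chlorophyll absorbs light most strongly in the blue and red bands.",
]
print(build_grounded_prompt("How does photosynthesis use light?", corpus))
```

Because the model only sees vetted passages, and is told to decline when they are insufficient, it has far less room to invent unsupported claims.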
We also run system-level assessments, including:
- Latency: how quickly the AI completes its task
- Uptime: how reliable the system is when you need to use it
- Cost and environmental efficiency: our responsibility to frugality and the planet
- Security and privacy guardrails
- Prompt engineering peer review, which helps decrease bias
- Temperature control: roughly, a confidence threshold for an AI's responses
These sit alongside many more system-level guardrails.
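Several of these system-level checks can be automated around each model call. The sketch below wraps a stubbed model call with a latency measurement against a budget and a fixed low temperature; the threshold, temperature value, and stub are assumptions for illustration, not EBSCO's actual configuration.

```python
import time

# Hypothetical guardrail values; EBSCO's actual settings are not public.
MAX_LATENCY_SECONDS = 2.0
TEMPERATURE = 0.2  # Lower temperature favors more deterministic responses.

def call_model(prompt, temperature):
    """Stand-in for a real model call, so the sketch is self-contained."""
    time.sleep(0.01)
    return f"(answer to: {prompt!r} at temperature {temperature})"

def assessed_call(prompt):
    """Run one request and record system-level quality signals."""
    start = time.monotonic()
    answer = call_model(prompt, TEMPERATURE)
    latency = time.monotonic() - start
    return {
        "answer": answer,
        "latency_s": round(latency, 3),
        "within_latency_budget": latency <= MAX_LATENCY_SECONDS,
    }

result = assessed_call("Summarize the article.")
print(result["within_latency_budget"])
```

Logging signals like these on every request is what makes it possible to detect quality degradation over time rather than only at release.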
Rigorous Vetting by Librarians and Subject Matter Experts (SMEs)
Our AI features are rigorously tested and carefully vetted by a diverse group of users, including researchers, librarians, and educators, to ensure they are both effective and responsibly integrated into the research process.
This comprehensive testing process helps us gather valuable feedback and refine each feature to meet the highest standards of accuracy, usability, and ethical responsibility.
By involving end users throughout development, we ensure that our AI tools genuinely support and enhance the research journey, offering insights and efficiencies that respect the complexity and integrity of academic inquiry.
This commitment to responsible AI ensures that our technology aligns with the needs and expectations of the scholarly community, fostering trust and reliability at every step.
Sample Rubric for Assessing AI Responses
A sample rubric that EBSCO uses to assess AI responses measures:
- Timeliness: Is the information presented in the Insight current rather than out-of-date?
- Tone: Does the information in the Insight match the tone in the article?
- Terminology: Does the terminology in the Insight match what is in the article?
- Accuracy: Is the information in the Insight accurate based on the details found in the article?
- Thematic coverage: Are the main themes from the article covered in the Insight?
- Usefulness: Was the Insight useful as supplemental material to the abstract and/or research?
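To show how rubric dimensions like these can be turned into repeatable checks, the sketch below computes simple proxy scores for two of them (Terminology and Accuracy). The heuristics, article, and key-term list are illustrative assumptions; EBSCO's assessments are performed by librarians and subject matter experts, not by this code.

```python
# Proxy scores for two rubric dimensions: Terminology and Accuracy.
# These word-overlap heuristics are illustrative assumptions only.

def word_set(text):
    """Lowercase a text and split it into a set of bare words."""
    cleaned = text.lower().replace(".", " ").replace(",", " ")
    return set(cleaned.split())

def score_insight(insight, article, key_terms):
    """Return proxy scores in [0, 1] for two rubric dimensions."""
    insight_words = word_set(insight)
    article_words = word_set(article)
    # Terminology: how many key article terms the Insight reuses.
    terminology = sum(t in insight_words for t in key_terms) / len(key_terms)
    # Accuracy: how much of the Insight is supported by the article.
    accuracy = len(insight_words & article_words) / max(len(insight_words), 1)
    return {"terminology": round(terminology, 2),
            "accuracy": round(accuracy, 2)}

article = "Coral reefs are declining due to ocean warming and acidification."
insight = "Coral reefs are declining due to ocean warming."
print(score_insight(insight, article, ["coral", "warming", "acidification"]))
# {'terminology': 0.67, 'accuracy': 1.0}
```

Automated proxies like these can flag responses for closer human review, but the rubric itself is ultimately applied by people.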
EBSCO has always been dedicated to high-quality, trustworthy data, and AI quality is no different.

Stay Informed
Contact us to learn more about AI at EBSCO, sign up for our AI beta programs, or collaborate with us on research and development initiatives.