Case Study: AI Seer × Google Cloud
Industry: Technology | Location: Singapore
Overview
AI Seer partnered with Google Cloud to scale Facticity.AI — a high-accuracy, multi-modal fact-checking platform — making real-time truth verification accessible to a global audience.
By leveraging Cloud Run, Artifact Registry, Vertex AI, and Gemini models, AI Seer achieved:
66.67% higher request throughput (vs Nov 2024)
Latency reduction from 14,693 ms to 17 ms
Faster development cycles and lower operational costs
Significant reduction in AI hallucinations through multi-LLM validation and human feedback loops
This collaboration enabled AI Seer to deploy production-grade fact-checking infrastructure while maintaining transparency, explainability, and reliability at scale.
The Challenge
In an environment where misinformation spreads faster than verification, AI Seer needed to:
Scale fact-checking during traffic spikes
Reduce latency without sacrificing accuracy
Minimize hallucinations from generative AI systems
Support rapid experimentation with a lean team
Traditional infrastructure and single-model AI approaches were insufficient for production-grade truth verification.
The Solution
Using Google Cloud’s serverless and AI stack, AI Seer built a scalable and resilient system:
Cloud Run & Artifact Registry: Containerized fact-checking services with auto-scaling and fast rollouts (a minimal service sketch follows this list).
Vertex AI & Gemini: Multi-LLM orchestration that cross-validates outputs, reduces hallucinations, and balances cost against performance; a cross-validation sketch appears later in this section.
Human Feedback Loops: Community moderation and expert review that dynamically refine model outputs over time.
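As a concrete reference for the Cloud Run item, here is a minimal sketch of what a containerized fact-check endpoint could look like. The /verify route, request shape, and check_claim placeholder are illustrative assumptions, not AI Seer's actual service; the only Cloud Run convention reflected here is listening on the PORT environment variable.

```python
# Minimal sketch of a containerized fact-check service for Cloud Run.
# The /verify route and check_claim() placeholder are illustrative; the only
# Cloud Run requirement shown is listening on the PORT environment variable.
import os

from flask import Flask, jsonify, request

app = Flask(__name__)


def check_claim(claim: str) -> dict:
    # Placeholder for the real pipeline (retrieval, multi-LLM checks, sourcing).
    return {"claim": claim, "verdict": "unverified", "sources": []}


@app.post("/verify")
def verify():
    body = request.get_json(silent=True) or {}
    claim = body.get("claim", "")
    if not claim:
        return jsonify({"error": "missing 'claim'"}), 400
    return jsonify(check_claim(claim))


if __name__ == "__main__":
    # Cloud Run injects PORT; default to 8080 for local runs.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

Because each request is stateless, Cloud Run can add container instances during traffic spikes and scale back down afterwards, which is the auto-scaling behavior referenced above.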
This architecture allowed AI Seer to handle surges in queries while maintaining high accuracy and explainability.
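For the multi-LLM item, the sketch below shows one way cross-validation can work using the vertexai Python SDK and Gemini models on Vertex AI. The model IDs, prompt, and agreement rule are assumptions for illustration only, not AI Seer's actual orchestration logic.

```python
# Illustrative sketch of multi-LLM cross-validation on Vertex AI.
# Model IDs, the prompt, and the agreement rule are assumptions, not AI Seer's
# actual pipeline; the vertexai SDK calls are the standard ones.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")  # assumed project/region

CANDIDATE_MODELS = ["gemini-1.5-pro", "gemini-1.5-flash"]  # illustrative model choices


def verdict_from(model_id: str, claim: str) -> str:
    model = GenerativeModel(model_id)
    prompt = (
        "Classify the following claim as TRUE, FALSE, or UNVERIFIABLE, "
        "and briefly explain why.\nClaim: " + claim
    )
    response = model.generate_content(prompt)
    return response.text.strip().split()[0].upper()  # crude label extraction


def cross_validate(claim: str) -> str:
    # Query several models and accept a verdict only when they agree;
    # disagreements are routed to human review (the feedback loop above).
    verdicts = [verdict_from(m, claim) for m in CANDIDATE_MODELS]
    if len(set(verdicts)) == 1:
        return verdicts[0]
    return "NEEDS_HUMAN_REVIEW"


print(cross_validate("The Eiffel Tower is in Berlin."))
```

Routing disagreements to human review is how the multi-model check connects to the community moderation and expert feedback loop described above.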
The Impact
66.67% increase in requests served per second
Latency reduced from ~14.7 s to 17 ms
Faster feature releases and shorter development cycles
Improved collaboration across technical and non-technical teams
Lower inference and infrastructure costs
“Our biggest concern with adopting generative AI is hallucinations. By using a multi-LLM approach with continuous feedback loops, we significantly reduce hallucinations and improve reliability.” — Dennis Yap, Founder, AI Seer
Why This Matters for ArAIstotle
The same infrastructure and methodologies power ArAIstotle, including:
Real-time, source-backed verification
Multi-model cross-checking
Community feedback and moderation (Truth-to-Earn)
High-availability performance during viral events
This case study demonstrates that ArAIstotle is not an experimental bot — it is built on enterprise-grade infrastructure designed for accuracy, scale, and trust.
Looking Ahead
AI Seer continues to build on Google Cloud, exploring:
Gemini 2.0 Flash for faster inference
Expanded browser and document integrations
Deeper multimodal and behavioral analysis
Founded in 2019, AI Seer remains committed to becoming the filter for misinformation by embedding authentication directly into information streams.
Read full case study: https://cloud.google.com/customers/ai-seer