Artificial intelligence (AI) is shaping the future of industries across the globe. Yet, the intricate mathematics and complex theories often associated with AI research can pose significant barriers to its broader adoption. That’s where Red Hat’s newest podcast, “No Math AI”, comes in.
Hosted by Dr. Akash Srivastava, Red Hat chief architect and a pioneer of the InstructLab project, and Isha Puri, a PhD student in AI at the Massachusetts Institute of Technology, “No Math AI” is a monthly podcast designed to make cutting-edge AI research more accessible. Whether you’re an AI practitioner, a business leader or a tech enthusiast, this podcast offers insights into the real-world impact of AI advancements on business.
Each episode will break down crucial AI concepts and distill them into actionable takeaways. “No Math AI” makes it easier for enthusiasts and business leaders to understand and embrace AI and incorporate it into their strategy with confidence.
Episode 1: Inference-time scaling and how small models beat the big ones
In the debut episode of “No Math AI”, Dr. Srivastava and Isha are joined by guest speakers and research engineers Shivchander Sudalairaj, GX Xu and Kai Xu. Together, they dive into a crucial topic that’s making waves in AI performance: inference-time scaling.
Our hosts and guest speakers discuss how this technique is unlocking new levels of performance for AI models, enhancing reasoning capabilities, powering agentic AI and helping ensure higher accuracy.
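For readers who want a concrete feel for the idea before listening, here is a minimal sketch of one common inference-time scaling strategy, best-of-n sampling: instead of taking a small model’s first answer, you sample several answers and keep the one a verifier scores highest. This is an illustrative assumption about the general technique, not the specific method discussed in the episode, and the function names below (generate_candidate, score_with_verifier) are hypothetical placeholders rather than APIs from any particular library.

```python
# Minimal sketch of inference-time scaling via best-of-n sampling.
# generate_candidate and score_with_verifier are hypothetical stand-ins
# for a small language model and a reward model/verifier, respectively.

import random


def generate_candidate(prompt: str) -> str:
    """Stand-in for sampling one answer from a small model."""
    return f"candidate answer to: {prompt} (sample {random.randint(0, 9999)})"


def score_with_verifier(answer: str) -> float:
    """Stand-in for a verifier or reward model that rates an answer."""
    return random.random()


def best_of_n(prompt: str, n: int = 8) -> str:
    """Spend extra compute at inference: sample n answers, keep the best-scoring one."""
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=score_with_verifier)


if __name__ == "__main__":
    print(best_of_n("What is 17 * 24?", n=8))
```

The pattern scales with the compute you are willing to spend: drawing more samples at inference time raises the chance that at least one answer is correct, which is one way a small model can close the gap with a much larger one.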
Tune in to the episode below to discover how inference-time scaling is transforming AI performance in real-world scenarios and how businesses can use it to stay ahead in the rapidly evolving AI landscape.
You can listen to and watch the first episode on Spotify or the Red Hat YouTube channel. Don't forget to join our hosts each month for more insights into how AI can help shape the future of your business.
About the author
Carlos Condado is a Senior Product Marketing Manager for Red Hat AI. He helps organizations navigate the path from AI experimentation to enterprise-scale deployment by guiding the adoption of MLOps practices and integration of AI models into existing hybrid cloud infrastructures. As part of the Red Hat AI team, he works across engineering, product, and go-to-market functions to help shape strategy, messaging, and customer enablement around Red Hat’s open, flexible, and consistent AI portfolio.
With a diverse background spanning data analytics, integration, cybersecurity, and AI, Carlos brings a cross-functional perspective to emerging technologies. He is passionate about technological innovations and helping enterprises unlock the value of their data and gain a competitive advantage through scalable, production-ready AI solutions.