🗣️ What AI Teams Spend Their Time On in 2025, Thoughts On Desktop Agents, HTAP for Agents, GenAI Reality Check, Go Embeddings
Community Insights #10: What tasks do AI teams actually spend their time on these days? Desktop agents, HTAP for agent data architectures, GenAI hype curve, Go-native embeddings.
Every other Thursday, we surface the most thought-provoking, practical, and occasionally spicy conversations happening inside the AI communities so you never miss out. The MLOps Community Slack is where the sharpest builders and tinkerers ask the hard questions and don't settle for surface-level answers.
In this edition of 🗣️ BuildAIers Talking 🗣️, here are five conversations you don't want to miss:
What are AI and software teams actually spending time on these days? Are they focusing on agents, predictive ML, voice, or all at once?
Are browser-native desktop agents useful or just demos?
Can a single database power OLTP, OLAP, and RAG?
Where are we on the GenAI hype curve? At the peak or plateau?
How do you use embeddings in Go without Python?
Grab a coffee ☕, dive in, learn what others are actually shipping, and decide what's hype vs. helpful. 👇
📊 What Tasks Are AI Teams Spending Their Time On in 2025: Agents, Predictive ML, GenAI, or Voice?
Channel: #mlops-questions-answered
Asked by: Demetrios
For a talk he's writing, Demetrios ran a quick vibe check on where AI developers, engineers, and teams are investing effort in 2025:
Poll Results:
🖥️ Predictive ML (recsys, fraud, search): 🥇 22 votes
🤖 Agentic use cases: 14 votes
🥉 Blend of everything: 11 votes
🧪 GenAI (image/video): 6 votes
🎙️ Voice AI: 6 votes

Key Insights
Andrey: "Where is the option for meetings and alignment?" (We feel you. 😅)
Arham: Migrating BERT → LLM-based classifiers.
Médéric: Platform teams spend most time supporting agentic use cases; ML is more mature and self-serve.
Sammer: Still laying infrastructure foundations for predictive workloads.
Editorâs Takeaways
Despite the buzz, predictive ML is still where much of the real work happens. Agentic workflows are rising, but they add complexity, eval overhead, and orchestration needs.
Many teams are juggling hybrids, but reliable, well-understood ML systems still drive the most consistent value.
👉 See poll + replies ☝️
🤖 Can "Desktop Agents" Actually Click, Type, and Ship? Or Is RPA 2.0 Still a Pipe Dream?
Channel: #agents
Asked by: Barrett Burnworth
Can an agent reliably "use" your desktop like a human by clicking through UI, completing workflows, and scaling across tasks?
Bytebot claims to run "desktop agents" in the cloud, which means it spins up fresh sandboxed machines to complete tasks by clicking through the UI. Sounds magical. But is it useful in production?

Key Insights
Mehdi B isnât convinced: the idea resurfaces repeatedly with browser automation, and reliability remains a blocker.
Demetrios wants it to work, but watching a cursor click through the DOM is slow and fragile. Steering the agent mid-flow defeats the point.
David DeStefano prefers Selenium/RPA for reliability: "If I need guarantees, I'll script it, not watch it try."
Barrett asks whether AI should adapt to our current situation or whether we should redesign workflows around it (as factories did after the steam engine).
Editorâs Takeaways
Desktop agent demos are compelling, but most teams still prefer deterministic automation when reliability matters. The big unlock might come not from emulating humans but from rethinking interfaces and workflows for AI-native control.
Until agents get faster, safer, and more controllable, RPA and scripts win for production: the question isn't "Can it click?" but "Can we trust it at scale?"
👉 Join the conversation ☝️
🧱 One Database for OLTP, OLAP, and RAG: Is HTAP Finally Practical for Agents?
Channel: #discussions
Asked by: Médéric Hurier (Fmind)
Médéric is exploring a unified HTAP (Hybrid Transactional/Analytical Processing) architecture for agents.
If you're building agents that need fast transactions (OLTP), analytics (OLAP), and vector search (RAG) in one system, should you go all-in on HTAP with something like AlloyDB for PostgreSQL or decouple with tools like DuckDB + Postgres + pgvector?

Key Insights
Cody Peterson: Suggested querying Postgres via DuckDB (or via a read replica to avoid OLTP contention) and even doing RAG inside DuckDB when needed.
Médéric: While Postgres + pgvector works well for OLTP + RAG, traditional OLAP at large scale is where columnar engines shine. Google claims AlloyDB is up to 100x faster on some OLAP workloads.
Ray Lucchesi: Why not Postgres + pgvector for most needs? Médéric's reply: at scale, row-based Postgres is solid for OLTP and vector search but not built for large-scale OLAP latency, hence the HTAP interest.
Editorâs Takeaways
HTAP (Hybrid Transactional/Analytical Processing) is attractive for agents that need fast reads/writes + analytics + semantic retrieval, but itâs not a free lunch.
Many teams still prefer DuckDB + Postgres (with replicas) for flexibility, cost control, and operational simplicity.
If you want full HTAP in one managed system, AlloyDB may be worth piloting, especially if you're in the Google ecosystem.
👉 Check out the thread ☝️
📈 Has GenAI Hit the Peak? Or Are We Still Climbing the Hype Curve?
Channel: #discussions
Asked by: Berndt Lindner
Where are we on the Gartner hype cycle for GenAI? Are GenAI valuations cooling, or is the bubble still inflating?

Key Insights
Mehdi B: Expect a correction, not a post-dotcom-style collapse; strong players will emerge.
Ray Lucchesi: This isn't as crazy as dotcom-level froth because the PE ratios aren't as wild.
Laszlo Sragner: Not in a full bubble yet. You usually see "escape velocity" frenzy before the crash.
Varshit Dusad: Agents may not survive broadly due to cost and reliability. Long-term winners: RAG, code completion, and transcription/translation. Expect model quality tradeoffs as economics tighten.
Editorâs Takeaways
We're likely in the realism (or, if you prefer, "maturing") phase, so still expect a ton of experimentation and new use cases to pop up.
But pragmatic GenAI applications (RAG, code assist, structured workflows) seem to be winning mindshare over "autonomous everything" narratives.
👉 Explore the thread ☝️
🔹 What's the Best Way to Use Embeddings in Go Without Python?
Channel: #llmops
Asked by: Zachary Royals
Zach asked for Go-native embedding/tokenization tooling, ideally without spinning up Python services.
Key Insights
raybuhr: Recommended sugarme/tokenizer (Hugging Face-inspired) and noted Go + TensorFlow is possible with cgo.
Mehdi B: shared Genkit for Go with a pgvector sample, though itâs somewhat Firebase-centric.
Pragmatic fallback: Several members still prefer a thin Python microservice for embeddings math, called over HTTP/gRPC, especially when maturity and model variety are needed.
Editorâs Takeaways
If you want to stay 100% Go, libraries like sugarme/tokenizer are your best bet today. For production-grade embeddings across providers, a hybrid approach (Go + a Python microservice) is still common until Go's ecosystem matures further.
👉 See the answers ☝️
Which community discussions should we cover next? Drop suggestions below!
We created ⚡ BuildAIers Talking to be the pulse of the AI builder community: a space where we share and amplify honest conversations, lessons, and insights from real builders. Every other Thursday!
🎯 Key Goals for Every Post:
✅ Bring the most valuable discussions from AI communities to you.
✅ Build a sense of peer learning and collaboration.
✅ Give you and other AI builders a platform to share your voices and insights.
The discussions and insights in this post originally appeared in the MLOps Community Newsletter as part of our weekly column contributions for the community (published every Tuesday and Thursday).
Want more deep dives into MLOps conversations, debates, and insights? Join the Slack community here; Slack sign-up is required.
Subscribe to the MLOps Community Newsletter (20,000+ builders), delivered straight to your inbox every Tuesday and Thursday! 📬
Know someone who would love this? Share it with your AI builder friends!



