Cognito AI Search 1.1.0 - Where Precision Meets Polish

Somewhere between the speed of thought and the rigor of research lies a space where every query, every formula, and every click feels perfectly weighted. Version 1.1.0 steps into that space, and invites you to work there.

Watch Numbers Come Alive

Type “\int_0^{\pi} \sin(x)\,dx” and see the integral bloom into beautifully typeset notation the instant you press Enter. No raw backslashes, no mental gymnastics, just mathematics rendered as if it rolled off a LaTeX press. Under the hood, remark-math and KaTeX plug directly into the React render pipeline, so inline snippets and multi-line derivations look as polished as the equations in your most-cited paper. Whether you’re debugging a signal-processing filter or walking a colleague through Black-Scholes, Cognito AI Search now speaks math fluently, without you ever opening a LaTeX editor.

“It feels like jotting on a whiteboard, only the board understands calculus.”
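
If you’re curious how that wiring looks, here is a minimal sketch, assuming react-markdown as the renderer; the component and file names are illustrative rather than the exact ones in the codebase:

// AnswerMarkdown.tsx - render AI answers with math support (illustrative sketch)
import ReactMarkdown from "react-markdown"
import remarkMath from "remark-math"    // detects $...$ and $$...$$ spans
import rehypeKatex from "rehype-katex"  // typesets those spans with KaTeX
import "katex/dist/katex.min.css"       // KaTeX fonts and spacing

export function AnswerMarkdown({ content }: { content: string }) {
  return (
    <ReactMarkdown remarkPlugins={[remarkMath]} rehypePlugins={[rehypeKatex]}>
      {content}
    </ReactMarkdown>
  )
}

Inline spans like $\int_0^{\pi} \sin(x)\,dx$ and display blocks wrapped in $$ … $$ both come out typeset, with KaTeX doing the layout on the client.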

Flow That Disappears

Animations used to be eye-candy. In 1.1.0 they are strategy. Cards glide into view on a staggered cadence, 150 ms for the AI narrative, 300 ms for curated web sources, so attention lands where it belongs, when it belongs. Transitions shift opacity and transform properties only; the GPU handles the rest at a rock-solid 60 fps. The result? A page that never blanks, judders, or jars. You move from query to answer as seamlessly as turning a page.

  • 500 ms coordinated fade-outs eliminate “flash of loading” artefacts
  • State is now centralised; whether you launch from a suggestion or the main bar, context stays intact
  • Mobile gestures inherit the same easing curves, nothing special to build, nothing special to learn
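
As a rough sketch of the staggering idea, here is one way to express it using nothing but opacity and transform; the delays and component name are illustrative, not the project’s exact implementation:

// StaggeredCards.tsx - fade/slide result cards in on a fixed stagger (illustrative)
import { useEffect, useState, type ReactNode } from "react"

const DELAYS_MS = [150, 300] // AI narrative first, curated web sources second

export function StaggeredCards({ cards }: { cards: ReactNode[] }) {
  const [visible, setVisible] = useState(false)
  useEffect(() => setVisible(true), []) // trigger the transition once, after mount

  return (
    <>
      {cards.map((card, i) => (
        <div
          key={i}
          style={{
            // opacity + transform only, so the compositor does the work
            opacity: visible ? 1 : 0,
            transform: visible ? "translateY(0)" : "translateY(8px)",
            transition: "opacity 300ms ease, transform 300ms ease",
            transitionDelay: `${DELAYS_MS[i] ?? 300}ms`,
          }}
        >
          {card}
        </div>
      ))}
    </>
  )
}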

Visual hierarchy has evolved, too. The playful brain icon bows out; a crisp Lucide “spark” takes its place, signalling focus without whimsy. Colours shift to a restrained palette that lets content, not chrome, steal the stage.

Performance You Notice by Not Noticing

Build time holds at 2.0 s. That’s 71 % faster than the baseline, but raw numbers miss the story. Nineteen orphaned files, four unused components, and a forest of duplicate types are gone, slicing the stylesheet nearly in half, from 481 to 244 lines. Less CSS means fewer bytes on the wire, fewer calculations in the browser, and a page that paints before you finish blinking.

Meanwhile React components shed nested conditionals and redundant hooks. The AI answer renderer, for instance, collapsed from eight branches to three. That clarity isn’t just elegant; it shortens the critical path and gives the garbage collector less to sweep.
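
In spirit, the surviving branches look something like this; the state and component names are hypothetical, chosen only to show the shape:

// AiAnswer.tsx - the three render branches left after the refactor (hypothetical shape)
type AnswerState =
  | { status: "loading" }
  | { status: "error"; message: string }
  | { status: "ready"; markdown: string }

// Placeholder children; the real components live elsewhere in the app.
const Skeleton = () => <div className="animate-pulse">Thinking…</div>
const Notice = ({ text }: { text: string }) => <p role="alert">{text}</p>
const Answer = ({ markdown }: { markdown: string }) => <article>{markdown}</article>

export function AiAnswer({ state }: { state: AnswerState }) {
  if (state.status === "loading") return <Skeleton />                   // branch 1
  if (state.status === "error") return <Notice text={state.message} /> // branch 2
  return <Answer markdown={state.markdown} />                           // branch 3
}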

IPv6 & Networks Without Borders

Your data centre already dual-stacks; your search tool now does the same. Native IPv6 support threads through every request, from the first SearXNG POST to the last markdown fetch. No hacks, no NAT awkwardness, just global addresses handling global ideas.
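
From the application’s side, a search call against a dual-stack or IPv6-only SearXNG host can be sketched like this; the address uses the IPv6 documentation prefix and the variable names are assumptions, not the project’s exact keys:

// searxng.ts - query SearXNG over IPv4 or IPv6 (illustrative; names are placeholders)
const SEARXNG_URL =
  process.env.SEARXNG_API_URL ?? "http://[2001:db8::42]:8080" // IPv6 literals go in brackets

export async function searchWeb(query: string) {
  const body = new URLSearchParams({ q: query, format: "json" })
  const res = await fetch(`${SEARXNG_URL}/search`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body,
  })
  if (!res.ok) throw new Error(`SearXNG returned ${res.status}`)
  return res.json() // results, infoboxes, suggestions, …
}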

For those running air-gapped clusters, 1.1.0 respects your topology. Environment variables now slot cleanly into the Dockerfile, avoiding the “rebuild-when-config-changes” anti-pattern. Drop new creds, restart in seconds, move on.
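
The pattern is simply to read configuration from the environment at runtime instead of baking it into the image; a minimal sketch with hypothetical variable names:

// config.ts - resolve settings when the server starts, so a restart (not a rebuild) picks up changes
// (variable names are illustrative, not the project's exact configuration keys)
export const config = {
  searxngUrl: process.env.SEARXNG_API_URL ?? "http://localhost:8888",
  ollamaUrl: process.env.OLLAMA_API_URL ?? "http://localhost:11434",
  model: process.env.DEFAULT_OLLAMA_MODEL ?? "llama3",
}

Because the values come from process.env when the container starts, editing your .env file and running docker-compose up -d is enough; no image rebuild required.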

Built for Builders

Containers get love, too. Stage names inside the Dockerfile finally match what your CI expects, so multi-stage caching works instead of “almost” works. Node rides the latest 24-alpine image: leaner, faster, with long-term security patches baked in. Tailwind CSS jumps to v4, pruning legacy directives, so you get modern features such as container queries and logical properties without a rewrite.

  • New SearXNG POST endpoint reduces latency when self-hosting the engine behind a VPN
  • Error boundaries wrap every network call; a failing source no longer torpedoes the thread (see the sketch after this list)
  • Configuration lives in one file, version-controlled and human-parsable, no more treasure hunts
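
The per-source isolation mentioned above can be sketched as a small wrapper that reports failures instead of throwing them; the helper name and result shape are made up for illustration:

// safeFetch.ts - isolate per-source failures so one bad upstream cannot sink the page
// (hypothetical helper; the React error boundaries wrap the cards that consume it)
type SourceResult<T> =
  | { ok: true; data: T }
  | { ok: false; error: string }

export async function safeFetchJson<T>(url: string, init?: RequestInit): Promise<SourceResult<T>> {
  try {
    const res = await fetch(url, init)
    if (!res.ok) return { ok: false, error: `HTTP ${res.status}` }
    return { ok: true, data: (await res.json()) as T }
  } catch (err) {
    // Network errors, DNS failures, aborted requests: report, don't throw
    return { ok: false, error: err instanceof Error ? err.message : String(err) }
  }
}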

Metrics That Matter

Metric                     | Before    | 1.1.0
Build time (cold)          | 7.0 s     | 2.0 s
Stylesheet lines           | 481       | 244
Animation frames dropped   | 4 %       | < 1 %
Unused files               | 19        | 0
Breaking changes           | 2 (minor) | 0

Translation: faster pipelines, lighter payloads, smoother UI, without a single breaking API change.

Migration, Simplified

Upgrading could fit in a tweet:

docker pull kekepower/cognito-ai-search:1.1.0
docker-compose up -d

If you pinned environment variables in a .env file, they slide straight through. KaTeX packages install automatically on first run; nothing else to tinker with. Fire a query, enjoy the view.

Why 1.1.0 Exists

The 1.0.0 launch proved that a private, local-first blend of AI and metasearch could feel friendly. Feedback poured in, from data-science labs, embedded-systems teams, and a surprising number of quantitative-finance desks. The ask was consistent: “Give us the same autonomy, but polish every edge.”

Version 1.1.0 is that polish. It doesn’t add noise; it removes friction. The new math engine? Because engineers showed screenshots of LaTeX soup in 1.0. The animation overhaul? Because product owners demoing to stakeholders needed a UI that felt finished. The build-time cuts? Because CI minutes cost real money, even in a self-hosted world.

Ready to Explore?

Drop 1.1.0 behind your firewall, point it at your own SearXNG instance, and let your queries breathe in private. Or skim the source, fork the repo, and make the interface your own. The license is MIT; the roadmap is public. The conversation starts at GitHub, and continues wherever you deploy.

“Search finally feels like part of the stack, not a paid gateway to it.”

Cognito AI Search 1.1.0: because the shortest path from question to insight shouldn’t detour through someone else’s server logs.