Attended Spring I/O 2026



This was my first time attending Spring I/O, and also my first time in Europe, or in Spain for that matter. Barcelona is incredibly beautiful, and I really loved how walkable the city is. I spent 6-7 hours the day before just walking around: Park Güell, the Sagrada Família (just the outside, as tickets were sold out), and the fort near Park Güell. These places were lovely, and the Gaudí architecture is wonderfully unique. I also went to L'Aquàrium, where I saw many unique fishes native to European waters. The European sturgeon looked like a dinosaur, which was pretty interesting. I also saw a fish take a shit, then immediately do a flip and eat its own shit.

Where I come from (Singapore), tech conferences are rarely this big; 200 or so attendees at most. I was surprised by how big Spring I/O is and how big the Spring community is. Over 1,100 people gathered at the conference, coming from all over the world. Spring I/O opened with a huge bang: a live music performance at the start of the conference. That was also something I had never seen before, a truly eye-opening experience.

On the second day, I joined a 5 km morning jog at 7am organised by JetBrains. To my pleasant surprise, many people (about 35 of the 48 registered) showed up despite drinking plenty the night before. Here's a picture of me running.

During the conference itself, not only did I meet some old friends (like Josh, James, Tommy, Jonatan, Rod), I also made new friends like Yannick, Catherine, Cristian, and Philip. They are all part of this vibrant community and extended a very warm welcome to me. Yannick forgot to pack a pair of running shorts but still ran really well in jeans, and he told me about the other amazing conferences he has been to. Catherine shared her passion for buildpacks and why she much prefers them over Dockerfiles. Cristian told me about VoxxedDays CERN, which he helps to organise, and how much he loves organising such conferences and meetups for the community. Philip shared his passion for uplifting the community through knowledge-sharing workshops. It truly was an enjoyable experience making new friends like these.

Something shifted at Spring I/O this year. The conversation wasn't really about Spring. It was about the JVM platform finally catching up to where developers needed it to be, and Spring riding that wave. Project Leyden, virtual threads, GraalVM native image maturing, AI integration. These are platform changes, not framework features.

During the two-day conference on April 14-15, I learnt a lot from the various talks and workshops. Here's what I took away.

Day 1: April 14

Keynote (Juergen Hoeller)

The 9:30 keynote set the tone for the whole conference. Juergen Hoeller didn't do a feature parade. He laid out where the platform is heading, and it touches everything from the JDK itself to how we think about AI in Spring applications.

Project Leyden brings Ahead-of-Time (AOT) caching to the JDK. AOT cache lands in JDK 25, with a more expanded version likely in JDK 29. This is the plumbing that makes Spring's AOT subsystem practical without traditional warm-up costs.

Project Loom keeps getting better. Virtual threads get improvements in JDK 25, and structured concurrency is targeted for JDK 29. The big picture here: virtual threads fundamentally change how we think about blocking. The industry spent years treating blocking as something to avoid (hence the reactive stack's whole existence). Virtual threads flip that. Blocking is fine if the thread is virtual and cheap. We are seeing a renewed focus on the blocking stack because of this, and honestly, it's about time.
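To make the "blocking is fine" point concrete, here is a minimal sketch of my own (not from the keynote) that runs thousands of blocking tasks, each on its own cheap virtual thread; every Thread.sleep releases the underlying carrier thread instead of pinning a platform thread:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

public class VirtualThreadsDemo {

    // Run many "blocking" tasks cheaply: each submit() gets a fresh virtual thread.
    public static long countCompleted(int tasks) throws Exception {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<Integer>> futures = IntStream.range(0, tasks)
                    .mapToObj(i -> executor.submit(() -> {
                        Thread.sleep(10); // blocking is fine: only the virtual thread parks
                        return i;
                    }))
                    .toList();
            long done = 0;
            for (var future : futures) {
                future.get();
                done++;
            }
            return done;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(countCompleted(10_000));
    }
}
```

Doing the same with 10,000 platform threads would exhaust memory on most machines; with virtual threads it is routine, which is exactly why the blocking stack is competitive again.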

JSpecify also showed up. @Nullable as a JSpecify type annotation brings proper null-safety semantics to the platform. Spring is adopting it.
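As a small illustrative fragment (assuming the org.jspecify artifact on the classpath; the interface and type names are hypothetical), @NullMarked makes non-null the default so that @Nullable marks the documented exceptions:

```java
import org.jspecify.annotations.NullMarked;
import org.jspecify.annotations.Nullable;

@NullMarked // everything in this type is non-null by default
public interface OrderRepository {

    Order save(Order order); // neither the parameter nor the return may be null

    @Nullable Order findById(String id); // null is the documented "not found" result
}
```

Because these are type annotations with well-defined semantics, tools and IDEs can flag a missing null check on findById at compile time rather than at runtime.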

The reactive stack isn't going away. It still gives you first-class concurrency, streaming, cache support, and non-blocking backpressure. However, virtual threads make the blocking stack competitive again for most workloads. If you've been forcing reactive patterns into places where simple blocking I/O would do the job, the platform is telling you to stop.

Spring AI is moving fast. Version 2.0 lays what they call "solid foundations":

  • Native Java SDKs for OpenAI, Anthropic, and Google GenAI (no more hand-rolling REST calls)
  • Better native features: prompt caching, batching, structured output
  • ChatClient refinements including ToolCallAdvisor and ToolSearchTool, plus lots of internal layer improvements
  • New session API
  • A2A (Agent-to-Agent) integration

Spring AI 2.1 targets "agentic foundations." The AI story is still early, but the direction is straightforward: bring AI into the Spring programming model instead of forcing Spring developers into a separate Python ecosystem.
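I have not verified the 2.0 API surface yet, but as a hedged sketch of the ChatClient programming model it builds on (names from Spring AI 1.x; the service and method names are mine, and the ChatModel bean comes from whichever provider starter you use):

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.stereotype.Service;

@Service
class AssistantService {

    private final ChatClient chatClient;

    AssistantService(ChatModel chatModel) {
        // ChatModel is auto-configured by the provider starter (OpenAI, Anthropic, ...)
        this.chatClient = ChatClient.builder(chatModel).build();
    }

    String answer(String question) {
        return chatClient.prompt()
                .user(question)
                .call()
                .content();
    }
}
```

The point of the model is that swapping providers is a dependency and configuration change, not a code change; the fluent ChatClient API stays the same.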

The keynote also touched on modular monoliths and Modulith. The idea is fewer systems with the same structure. Modulith gives you ArchUnit-based verification, isolated testing, and integration testing with selective change detection (you only test the change and its parent module). This pairs well with Oliver Drotbohm's talk later that day.
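The module verification mentioned above looks roughly like this in Spring Modulith (a sketch, assuming a Spring Boot application class named Application and the spring-modulith-core dependency):

```java
import org.junit.jupiter.api.Test;
import org.springframework.modulith.core.ApplicationModules;

class ModularityTests {

    @Test
    void verifiesModuleStructure() {
        // Derives module boundaries from the package structure and fails the build
        // on cycles or disallowed cross-module access (ArchUnit under the hood).
        ApplicationModules.of(Application.class).verify();
    }
}
```

Isolated module testing then comes from annotating a test class with @ApplicationModuleTest, which bootstraps only the module under test rather than the whole application.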

Bootiful Spring Boot 4

Session link | Josh Long (Broadcom)

Josh Long doing what Josh Long does: a mix of entertainment and substance. The focus was Spring Boot 4.

Things that caught my attention:

  • Concurrency limits are now a first-class concern. Instead of bolting on rate limiting or semaphores after the fact, you can reason about concurrency at the configuration level.
  • Retryable support got improvements. Retry logic sounds trivial until you hit the edge cases: exponential backoff, jitter, circuit breaker interaction. Having this properly baked in saves real pain.
  • Meta annotations via @Component. You can create meta annotations for implicit bean registration. Small ergonomics win, less boilerplate.
  • Bean Registrar with package-level nullability. Mark everything in a package as non-null via package info, which lines up with the JSpecify adoption from the keynote.
  • WebAuthn and One-Time-Token (OTT). WebAuthn is the modern auth standard, but MFA setup is still a concern. OTT gives you a simpler flow alongside WebAuthn without forcing users through a full MFA setup.
  • AOT subsystem advances, tied to Spring 6+ and Project Leyden. You can specify database dialects in the AOT config and view the generated AOT code directly. AOT has been a black box for a while. Being able to inspect what gets generated is a big deal for debugging.
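The meta-annotation point is the kind of small ergonomics win that is easy to picture; a hedged sketch of my own (not Josh's example), using the stereotype meta-annotation mechanism Spring already supports:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.springframework.stereotype.Component;

// Custom stereotype: any class annotated with @BusinessService is registered
// as a Spring bean, because @Component is meta-present on this annotation.
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Component
public @interface BusinessService {
}
```

Usage is then just `@BusinessService class CheckoutService { ... }`: the domain-specific name carries intent, and the boilerplate of declaring a component disappears into the annotation.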

Domain-centric? Why Hexagonal- and Onion-Architecture Are Answers to the Wrong Question

Session link | Oliver Drotbohm (Broadcom)

This was my favourite talk of the conference. Both content and delivery.

Oliver builds a story. The talk opens with a simple question: is complexity increasing over time? Yes, obviously. Most of us spend our time changing and maintaining existing applications, not building greenfield stuff. That's the actual challenge.

Then he introduces the key insight: cost of change is proportional to coupling. If changing A requires you to change B and C, then B and C are coupled to A. Not controversial. What is worth thinking about: not all coupling is equal, and decoupling has its own costs. Cohesion is coupling in the right places. Software design is the act of modelling strong cohesion, grouping elements together because they belong together, not because they share a technical layer.

He walks through Hexagonal Architecture (application core with business logic, connected to adapters via dedicated ports) and Onion Architecture (domain at the center, application layer surrounding it, infrastructure on the outside) with respect but also with a pointed question: what problem are we actually solving?

The "technology-free domain" ideal sounds nice in blog posts. In practice it introduces its own complexity. And the real question: what if you have multiple domains? Do you create multiple onions? Multiple hexagons?

This is where it gets good. Rather than one massive onion, think of sliced onions: vertical slices, each with its domain intact. Slices communicate through event buses or Spring beans. If you squint, sliced onions look a lot like modules.

The implications stuck with me:

  • Functional decomposition over technical decomposition. Organise by business capability, not by technical layer.
  • Intrinsic complexity determines accidental complexity. The more complex the domain, the more accidental complexity your architecture introduces. Pick an architecture that minimises this.
  • Encapsulation over organisation. Oliver used an analogy I liked: do you put all your chairs in one room, all your tables in another, and all your lamps in a third? Or do you organise by room, each room having the furniture it needs?

He also introduced jMolecules, a library for expressing and enforcing these structural constraints in code.
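For reference, jMolecules expresses architectural roles as plain annotations that tooling can then verify; a small sketch with hypothetical domain names:

```java
import org.jmolecules.ddd.annotation.AggregateRoot;
import org.jmolecules.ddd.annotation.Repository;

// Documents the DDD role of the type; jMolecules' ArchUnit rules can then
// verify constraints such as "aggregates are only referenced via repositories".
@AggregateRoot
public class Order { /* ... */ }

@Repository
interface Orders { /* ... */ }
```

The appeal is that the architecture lives in the code itself rather than in a wiki page, so violations fail the build instead of surfacing in review.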

The delivery was what made it. Smooth storytelling, every concept grounded in a practical question, no dwelling on theory for theory's sake. I liked this talk the most out of everything at the conference.

Day 2: April 15

DB-first Meets Code-first: Persistence Workflows in IntelliJ IDEA

Session link | Anton Arhipov (JetBrains)

Anton Arhipov walked through IntelliJ IDEA's persistence tooling for both DB-first and code-first workflows (JPA and Spring Data JDBC). A lot of this tooling flies under the radar, which is a shame.

Working with a legacy database is where it gets painful. Migrations with Flyway or Liquibase are fine in theory. Entity generation is where things fall apart. Without some form of domain-driven design, it's hard to know what connects to what. You need to explore the relationships first, before generating entities. You need to read the code.

Schema changes cause entities to drift out of sync. The tool can't tell a rename is a rename on its own; you have to review the changes and press merge, and it becomes a RENAME.

A question that came up: how do migrations work across git branches? If two people both create migration version 5 independently, you get a conflict the tool can't auto-resolve. That's a workflow problem more than a tooling problem, but better support would be nice.

JPA Buddy was mentioned as a plugin. Its features are being absorbed into IntelliJ directly.

Building Killer AI Agents on Your Spring Stack with Embabel

Session link | Rod Johnson (Embabel)

Rod Johnson, the creator of the Spring Framework, is now building Embabel: a framework for multi-step AI agents. It's written in Kotlin, but Java users don't need to worry; using it from Java takes minimal wrangling.

The core idea is dynamic pathfinding. Given a goal, the framework builds a path through available actions to reach it. At each step, it evaluates what's available at that point in time. You don't have to specify conditions explicitly; the framework looks at the expectations and figures out the conditions itself.

One practical point that resonated: don't give all your tools to the LLM. The more tools you expose, the more confused it gets. Be selective about what the agent can reach at each step. Tool calling in Embabel helps keep the LLM grounded. You create tools, the framework decides when to use them.

Interesting talk, but short. I wanted more depth on how the pathfinding actually works and what the trade-offs look like in practice.

Prepare Your Next Spring Boot Migration (Workshop)

Session link | Tim te Beek & Merlin Bogershausen (Moderne)

Hands-on workshop covering Spring Boot migration from 2.x/3.x to 4.x using OpenRewrite, an open-source auto-refactoring tool. Spring Boot 4 brings Hibernate 7, Jakarta EE 11, Spring Security 7, and Java 17+ with it. It's a coordinated upgrade, not a single dependency bump.

The interesting bit: writing migration scripts that agents can augment and help with. They gave out API keys for hands-on exercises.

It was fine. Not a standout, but the agent-augmented migration idea is worth watching.

Reduce LLM Calls with Vector Search Design Patterns

Session link | Raphael De Lio (Redis)

The premise is simple: LLMs are expensive, slow, and energy-hungry. Not all context is good context. Precision drops as you use up more context, even with a larger context window.

Raphael De Lio covered three patterns using Redis vector search to reduce LLM dependency.

Semantic classification. Classify intent with vector search instead of prompts. Prepare your data (what are we classifying?), vectorise it, and use similarity search to match intent without calling an LLM.

Semantic tool calling. Map a vector result to an agent tool. "What's the weather today" maps to tool_get_weather. The LLM never needs to decide which tool to call; vector search handles the routing. There's also tool calling chunking for compound queries. Something like "Hey, I had a bad day yesterday because of the terrible weather. Will it rain today?" won't hit a naive cache. Chunking based on sentence boundaries decomposes it into cacheable parts.

Semantic caching. User prompt gets vectorised, similarity search runs, if it's similar enough, return the cached response. You could even proactively generate responses for common queries.
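The semantic caching pattern is easy to sketch without Redis at all. Here is a minimal in-memory stand-in of my own (a real implementation would store the vectors in Redis and use its similarity search): it returns a cached response whenever the query embedding clears a cosine-similarity threshold.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

public class SemanticCache {

    // Parallel lists of (embedding, response) pairs standing in for a vector store.
    private final List<float[]> keys = new ArrayList<>();
    private final List<String> responses = new ArrayList<>();
    private final double threshold;

    public SemanticCache(double threshold) {
        this.threshold = threshold;
    }

    public void put(float[] embedding, String response) {
        keys.add(embedding);
        responses.add(response);
    }

    public Optional<String> lookup(float[] query) {
        for (int i = 0; i < keys.size(); i++) {
            if (cosine(keys.get(i), query) >= threshold) {
                return Optional.of(responses.get(i)); // cache hit: no LLM call needed
            }
        }
        return Optional.empty(); // cache miss: fall through to the LLM
    }

    static double cosine(float[] a, float[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}
```

Semantic classification and semantic tool calling are structurally the same lookup; only the cached payload changes (an intent label or a tool name instead of a full response).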

I had a question about user history and PII: if you want user-specific data as part of the query, can you still do semantic caching? The answer was that you can tag user ID as metadata and filter by PII, but it wasn't fully answered in a satisfactory manner. Something I'll have to dig into myself.

He also covered AI advisors (interceptors that handle requests/responses in Spring AI apps; they can enrich or cancel requests) and the retrieval optimiser (give it a dataset, a classifier, test results, and it tells you what the threshold should be). The SemanticGuardrailAdvisor checks whether a request is allowed to proceed.

Spring Native: The Future of Fast and Efficient Spring Applications

Session link | Alina Yurenko (Oracle)

I've been following GraalVM native image for a while. According to the State of Spring survey, 37% of Spring users either already run natively compiled apps in production or are evaluating it.

The elephant in the room: is native image actually slower than peak JVM? The JVM does Just-In-Time (JIT) optimisation based on hotspots at runtime; AOT native compilation doesn't have that. However, Profile-Guided Optimisation (PGO) changes the picture: you build an instrumented binary (the --pgo-instrument flag) to collect profiling data, then feed the profile back into the build with --pgo. Two-step PGO on naive benchmarks actually outperforms even the ML-optimised builds. Promising.
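The two-step PGO workflow maps to GraalVM's documented flags roughly like this (a sketch; the application name is hypothetical):

```shell
# Step 1: build an instrumented binary and exercise it with a representative workload
native-image --pgo-instrument -o myapp-instrumented -jar myapp.jar
./myapp-instrumented   # writes default.iprof on exit

# Step 2: rebuild, feeding the collected profile back into the compiler
native-image --pgo=default.iprof -o myapp -jar myapp.jar
```

The quality of the final binary depends heavily on how representative the instrumented run was, so the workload in step 1 matters as much as the flags.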

Practical bits worth knowing:

  • -H:Preserve for zero-config migration. AOT might strip non-configured libraries; this preserves elements from a given package. Useful when third-party libs aren't explicitly registered.
  • SBOM generation via --enable-sbom. Native image can produce a Software Bill of Materials, increasingly relevant for supply chain security.
  • WebAssembly compilation is still in progress.
  • GraalVM native image layers, similar in concept to Docker layer caching.
  • Project Crema: "open world" for native image, addressing the closed-world assumption that's been a limitation of AOT compilation.

Also mentioned: Tamboui, used for building modern terminal user interfaces.

The delivery was standard. Content was fine. The talk covered a lot of ground but each topic could have used more depth. Felt like a survey when I wanted a deep dive.

I Can See Clearly Now: Observability of JVM and Spring Boot 2-3-4 Apps

Session link | Jonatan Ivanov & Tommy Ludwig (Broadcom)

What is observability? How well we can understand the internals of a system based on its outputs. Your app runs with or without observability. But when things go wrong, it's the difference between debugging and guessing.

The talk used a tea system as a framing device. Yes, tea, the drink. Surprisingly effective. The tea system illustrated three things:

  1. Things get slow, and detecting slowness is harder than detecting failures.
  2. Unknown unknowns. You can't alert on what you didn't think to measure.
  3. Observer perception differs. Everything can be broken for users while your dashboards look green. The error on the user's frontend is often not the error the server reports.

The tooling covered: Micrometer, OpenTelemetry, Grafana, OpenZipkin, Prometheus, OpenMetrics. The emphasis was on vendor-neutral instrumentation. Instrument once, export to whatever backend you want.
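The "instrument once, export anywhere" idea is what Micrometer's MeterRegistry abstraction gives you; a small sketch (the tea-themed names are mine, keeping with the talk's framing):

```java
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

class TeaService {

    private final Timer brewTimer;

    TeaService(MeterRegistry registry) {
        // The registry is the vendor-neutral seam: in production it is backed by
        // Prometheus, OTLP, etc., swapped via configuration rather than code changes.
        this.brewTimer = Timer.builder("tea.brew.duration")
                .tag("type", "earl-grey")
                .register(registry);
    }

    void brew() {
        brewTimer.record(() -> {
            // ... actual brewing work ...
        });
    }
}
```

Because the instrumentation only talks to the MeterRegistry interface, switching monitoring backends never touches code like this.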

Good talk. The story-based approach kept things engaging.

Closing Thoughts

Spring I/O 2026 felt like the ecosystem hitting its stride. Less flash, more convergence. JDK improvements (Leyden, Loom) are fixing things the JVM was historically bad at. GraalVM native image is finding practical answers to its limitations. Spring AI is moving from experimentation to something you can actually build production features on. Observability is getting the attention it always deserved.

My highlight was Oliver's architecture talk. Not because the ideas were new (they've been floating around), but because the delivery was exceptional and the framing was precise. "Encapsulation over organisation" is a principle I'll be carrying into my own projects.

Many thanks to Sergi and the volunteer team for making Spring I/O such a fantastic event. I loved the content and the community.

Also, huge thanks to the sponsors who helped make this event possible.

Did you also attend Spring I/O? How did it feel? Leave your comments below!

Source: dev.to
