time to read 7 min | 1339 words

I still remember the bookstore. I was holding a 600-page brick of a book on how to build Windows applications, trying to convince my mother that I really needed it. This was 1994 or 1995. A book was how you learned to program at that time. You took it home, you read it cover to cover, you typed the examples by hand, and somewhere along the way, the ideas sank in.

From there, the tools for learning kept evolving. Printed books gave way to CD-ROMs and then to online documentation. Then came the explosion of blogs and RSS feeds. I started this blog at that time, and I still consider that era to be one of the best ones in terms of having amazing access to smart and knowledgeable people, freely sharing their insights and experiences.

Google killed Google Reader (yes, I am still angry about that) and a lot of the new people learned via Stack Overflow. The world entered a strange equilibrium that lasted, honestly, more than a decade. If you learned to code any time between roughly 2010 and 2022, you probably learned through some combination of Google, Stack Overflow, and maybe YouTube.

Then the floor moved again. First it was ChatGPT, where you copy-pasted code back and forth. Then the models were integrated into the IDE. Now, with Claude Code and Codex, it is something else entirely: an agent that just runs, makes decisions, and does the thing.

The arc is striking when you lay it out. You used to have to go to a physical library, pick up a physical book, read it, digest it, and think about it. Today, the prevailing message to a new developer is essentially: you do not need to know any of that. Just describe what you want, and it happens.

Hidden costs of reduced conceptual depth

This shift is not just about convenience. It changes the depth of knowledge a developer carries, and that has consequences. Here is the example I keep coming back to. Imagine you ask a developer to show you a website that they built.

If you asked that in the late nineties, it meant something. To do that, you had to purchase a domain. Understand DNS well enough to wire it up correctly. Set up a web server, which meant getting Apache to actually run. Successfully configure PHP and deploy scripts to production.

By the time you could point to a working URL, you had to touch every layer of the stack. There was no other choice. Therefore, you were at least passingly familiar with a lot more than you would be today.

Ask that same question of many developers today, and the answer is a Vercel subdomain. That is not a dig at Vercel, mind you - it is a great product, and abstraction is the whole point. But some of these developers genuinely do not know what DNS is. They do not know what is running on the server versus the client. They do not know that there is even a meaningful distinction. And we have seen real security incidents come out of exactly that gap — secrets leaking into client bundles, auth logic running where it should not, and CORS misconfigurations that nobody understood well enough to notice.

Now extend that same dynamic one more step. Take the cohort of developers who will learn to program primarily through this new generation of agentic tools. The abstraction is no longer just over DNS or deployment. It is over the act of writing the code itself.

What is the role of a junior developer now?

I think we are going to end up with a genuinely different type of engineer and, as a result, a genuinely different type of system.

“If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks”.

– Plato, Phaedrus (c. 429-347 BCE)

Every generation has been accused of being softer than the one before it, as the quote above attests. In this case, Plato is decrying writing as a corrupting influence on youth who no longer bother to just remember things.

Without the attribution, I don’t think you would have realized that this isn’t me talking about developers using coding agents instead of learning on their own.

In software, we see much the same pattern. The person who wrote assembly looked down on the C programmer. The C programmer looked down on the Java programmer. The Java programmer looked down on the person gluing libraries together in Python. Each step up the abstraction ladder lets people build bigger, more ambitious things with less effort. That is mostly good.

But there is a real asymmetry this time. The earlier steps abstracted away mechanical work — memory management, boilerplate, deployment plumbing. This step abstracts away the reasoning itself. And reasoning is what you need when the abstraction leaks, which it always eventually does.

The question I am actually struggling with, day to day, is much more practical: how do I evaluate a junior developer in this sort of world?

The classic move was a take-home task. Build a small feature. Show me your thinking. The problem is that a capable model will produce a perfectly clean solution to any reasonable take-home in a few minutes. What you see in the submission tells you almost nothing about what the candidate actually understands. It tells you they can prompt well, which is a real skill, but it is not the skill I am trying to measure.

I can also ask them to solve a task while they are in our offices, so I can verify no AI use. But that is also stupid; I want them to use AI. After all, that is a great productivity enhancer. So I need a way to test understanding, not just the output.

The signals I care about are the ones that are hardest to fake in an agent-assisted world. Can you debug something when the model is wrong? Can you explain why a piece of generated code is subtly unsafe, or slow, or wrong in a way that only matters at the hundredth user? Can you make a reasoned call about which abstraction to reach for and which one to reject? When the system behaves unexpectedly, do you know where to look?

At the same time, those aren’t usually qualities that you can look for in a junior developer. Having those qualities usually means that they aren’t junior anymore.

People used to grind LeetCode as a way to show how good they were in interviews. That served as a stand-in for what they knew and understood. What is the next stage here?

What does a junior do to exercise their skills and show that they can bring value to the team? I don’t know if I have good answers to those questions. But that is something we, as an industry, need to consider carefully.

I do not want to be the old man yelling at the cloud. The tools are genuinely great, and refusing to use them is its own kind of malpractice. AI coding agents can make you meaningfully more productive.

But when I talk to developers just starting out, the thing I keep pushing is this: use the tools, and also, on a regular basis, go down a layer. Set up a server yourself. Deploy something without a platform holding your hand. Read the DNS records. Look at what your framework is actually generating. Write something in a language without a package manager that hides the sharp edges.

Not because you will do it that way at work. But because the next time something breaks in a way the agent cannot fix, you will have a mental model to fall back on. You will know where the seams are. You will know what to look at.

That mental model is, I suspect, going to be the thing that separates the engineers who compound over a career from the ones who get stuck the first time the abstraction leaks.

time to read 5 min | 817 words

In the 2000s, the hottest move in software was offshoring. You'd ship your requirements to a development shop in India, Vietnam, or Bangladesh, pay a fraction of Western developer rates, and wait. The cost savings were real, every spreadsheet said so. The failure modes were also real, every CTO said so.

Even assuming that the teams working on your code were smart, motivated, and hardworking, the distance, the communication overhead, the time zone mismatch, and the misaligned incentives created a brutal set of constraints. If you wanted to get good results from offshoring, you needed to be able to clearly specify what you wanted and be good at validating that you got what you expected.

You couldn't just say "I need a login system." You had to write detailed specs, break work into reviewable chunks, define acceptance criteria, and actually read the code that came back. Not rubber-stamp it. Read it, make sure that it passed muster and could be accepted internally, because the delta between "looks right" and "is right" could cost you six months of production incidents.

Sound familiar? Today, instead of shipping my requirements to a dev shop overseas, I'm shipping them to a GPU somewhere. I get something back. It looks like code. It might be code. It might be a very convincing facsimile of code that will quietly fail in production under load. I genuinely don't know until I sit down and read it carefully.

The same discipline that separated successful offshore engagements from expensive disasters applies here as well:

  • Specification quality determines output quality. Vague prompts return vague code. The ability to articulate exactly what you want — at the right level of abstraction — is now a core engineering skill.
  • Validation is non-negotiable. "It passed the vibe check" is not a code review. The reviewer needs to understand what the code is doing and why, not just that it compiles and the tests are green.
  • Iterative delivery beats big-bang delivery. Nobody who survived offshoring tried to outsource an entire product in one shot. You stage it. You review at each stage. You course-correct before mistakes compound.

The Bottleneck Has Moved

Here's what I think is the deeper shift: for most of software history, the bottleneck was writing the code. That took time and required expensive humans. So the industry optimized heavily around it: better editors, better frameworks, and better abstractions, all in service of making the act of writing code faster and less error-prone.

That bottleneck is collapsing. What once took six months might take six hours. When the cost of implementation approaches zero, the bottleneck moves upstream: to design, specification, and verification. The expensive parts are now:

  1. Understanding the problem clearly enough to describe it precisely.
  2. Decomposing it into well-scoped, independently verifiable pieces.
  3. Reviewing what comes back and actually understanding it.

These are skills we largely deprioritized during the era when coding itself was the hard part. They're about to become the most valuable things a technical person can do.

A lot of that used to be done “along the way”: you would explore the problem and gain depth of understanding as you wrote the code. Now that just doesn’t happen, but you still need to do that work explicitly.

A note about the importance of proper architecture

There is this idea that the path to building big systems with AI is to spin up a swarm of specialized agents (a frontend agent, a backend agent, a database administrator agent, etc.) and somehow orchestrate them into a coherent product.

I find this baffling, because we already have a well-established protocol for coordinating the work of specialized, partially independent contributors on a complex system. It's called software design.

Module boundaries. Interface contracts. Separation of concerns. Dependency management. SOLID principles and more. These patterns exist precisely because complex systems built by multiple contributors without clear interfaces turn into unmaintainable messes. This is true whether those contributors are humans, offshore teams, or language models.

The instinct to throw orchestration complexity at a coordination problem is exactly backwards. The answer isn't a smarter message bus between your agents. The answer is better system design that minimizes how much the pieces need to talk to each other in the first place.

We have literally decades of experience in how to build large software systems (and thousands of years of experience in how to handle large projects in general). There isn’t anything inherently new here to deal with.

The developers who will thrive in this environment aren't necessarily the ones who write the most elegant code. They're the ones who can hold a complex system design in their head and communicate it clearly, break the work into well-specified, verifiable increments, and actually read the code that comes back and hold it to a real standard of quality.

These are, in large part, the same skills that made the best engineering leads effective during the offshoring era. The context has changed completely. The discipline hasn't.

The GPU is the new Bangalore. Time to dust off the playbook.

time to read 6 min | 1001 words

I’m convinced that in hell, there is a special place dedicated to making engineers fix flaky tests.

Not broken tests. Not tests covering a real bug. Flaky tests. Tests that pass 999 times out of 1,000 and fail on the 1,000th run for no reason you can explain with a clean conscience.

If you've ever shipped a reasonably complex distributed system, you know exactly what I'm talking about. RavenDB has, at last count, over 32,000 tests that are run continuously on our CI infrastructure. I just checked, and in the past month, we’ve had hundreds of full test runs.

That is actually a problem for our scenario, because with that many tests and that many runs, the law of large numbers starts to apply. Assuming we have tests that have 99.999% reliability, that means that 1 out of every 100,000 test runs may fail. We run tens of millions of those tests in a month.

In a given week, something between ten and twenty of those tests will fail. Given the number of test runs, that is a good number in percentage terms. But each such failure means that we have to investigate it.

Those test failures are expensive. Every ticket is a developer staring at logs, trying to figure out whether this is a genuine bug in the product, a bug in the test itself, or something broken in the environment. In almost all cases, the problem is with the test itself, but we have to investigate.

A test that consistently fails is easy to fix. A test that occasionally fails is the worst.

With a flaky test, you don't just fix something and move on. You spend two days isolating it. Reproducing it. Building a mental model of a race condition that only manifests under specific timing, load, and cosmic alignment.

The tests that do this are almost always the integration tests. The ones that test complex distributed behavior across many parts of the system simultaneously. By definition, they are also the hardest to reason about.

The fact that, in most cases, those test failures add nothing to the product (i.e., they didn’t actually discover a real bug) is just crushed glass on top of the sewer smoothie. You spend a lot of time trying to find and fix the issue, and there is no real value except that the test now consistently passes.

We have a script that runs weekly, collects all test failures, and dumps them into our issue tracker. This is routine maintenance hygiene, to make sure we stay in good shape.

I was looking at the issue tracker when the script ran, and the entire screen lit up with new issues.

Just looking at that list of new annoyances was enough to ruin my mood.

And then, without much deliberate planning, I did something dumb and impulsive: I copy-pasted all of those fresh issues into Claude and told it to fix them. Then I went and did other things. I had very low expectations about this, but there was not much to lose.

A few hours later, I got a notification about a pull request. To be honest, I expected Claude to mark the flaky tests as skipped, or remove the assertions to make them pass.

To my shock, I got an actual pull request with real fixes. Some of them were fixes applied to test logic. Some were actual fixes in the underlying code.

And then there was this one that stopped me cold. Claude had identified that in one of our test cases, we were waiting on the wrong resource. Not wrong in an obvious way — wrong in the kind of way that works perfectly 99.9998% of the time and silently fails 0.0002% of the time.

The (test) code looked right. We were waiting for something to happen; we just happened to wait on the wrong thing, and usually the value we asserted on was already set by the time we were done waiting.
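
To make that concrete, here is a hypothetical reconstruction of the shape of such a bug (not the actual RavenDB test; the names and timing are invented for illustration):

using System;
using System.Threading;
using System.Threading.Tasks;

class FlakyWaitDemo
{
    static int indexedValue;
    static readonly TaskCompletionSource documentStored = new();
    static readonly TaskCompletionSource indexUpdated = new();

    static async Task Main()
    {
        _ = Task.Run(() =>
        {
            documentStored.SetResult();           // step 1: the document write completes
            Volatile.Write(ref indexedValue, 42); // step 2: indexing catches up a beat later
            indexUpdated.SetResult();
        });

        // BUG: we wait on the wrong signal; this should be indexUpdated.Task.
        // Step 2 has usually finished by the time the assertion runs, so the
        // test passes almost every time, and very occasionally does not.
        await documentStored.Task;

        Console.WriteLine(Volatile.Read(ref indexedValue) == 42 ? "pass" : "FLAKY FAIL");
    }
}

Run that in a tight loop and it will pass almost every time, which is exactly what makes this class of failure so expensive to chase.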

Claude found it. In one pass. For the price of a subscription I was already paying. For reference, that single “let me throw Claude at it” decision probably saved enough engineering time to cover the cost of Claude for the entire team for that month.

Let me be precise about what happened and what didn't. Claude did not fix everything. Some of the "fixes" it produced were pretty bad, surface-level patches that didn't address the real cause, or things that were legitimately out of scope.

You still need an engineer reviewing the output. And you still need judgment.

But it got things fixed, quickly, without needing two days to context-switch into the problem space. And the things it did fix well, it fixed really well.

The work it compressed would have realistically taken one developer a week or two to grind through — and that's assuming you could get a developer to focus on it for that long in the first place. Flaky test investigation is the kind of work that quietly kills team morale.

Engineers start dreading CI. They start treating red builds as background noise. That's how quality degrades silently. Leaving aside new features or higher velocity, being able to offload the most annoying parts of the job to a machine is… wow.

Based on this, we're building it into our actual workflow as an integral part of how we handle test maintenance. Failures are collected and routed to Claude, which takes a first pass at triage and repair. Then we create an issue in the bug tracker with either an actual fix or a summary of Claude’s findings.

By the time a human reviews this, significant progress has already been made.

It doesn't replace the engineer. But it means the engineer is doing the interesting part of the work: judgment, review, architectural reasoning, while skipping the part that requires staring at race condition logs until your vision blurs.

This isn’t the most exciting aspect of using a coding agent, I’m aware. But it may be one of the best aspects in terms of quality of life.

time to read 6 min | 1198 words

No, the title is not a mistake, nor did I use my time travel pass to give you insights from the future. Bear with me for a moment while I explain my thinking.

From individual contributor to oversight role

I started writing RavenDB in a spare bedroom, which turned into an office. The project grew from a spark in my head that wouldn’t let me sleep into a major project in very short order.

Today, I want to talk about a pretty important stage that happened during that growth phase. Somewhere between having five and ten full-time developers working on RavenDB, I lost the ability to keep track of every single line of code that was going into the project.

I had been the primary developer for years at that point. I wrote the majority of the code, and I was the person making all the key decisions in the project. And then, gradually, I… wasn't that guy anymore.

There were too many moving parts, too many developers, too many decisions happening in parallel for me to have my hands on all of it. That was the whole point of growing the team, dividing the tasks among the team members, and getting good people to do things so I didn’t have to do it all myself.

What I didn't expect was how much it would bother me. Moving from being the primary developer to a supervisory role didn’t mean that I lost the ability to write code. In fact, in many cases, I could “see” what the solution for each issue should be.

I just didn’t have the time to do that, nor the capacity to sit with every single developer on every single issue and craft the right way to solve it. I'd hand a feature to a developer knowing that the way they were going to handle it would not be mine.

That doesn’t mean it would be wrong, but it wouldn’t be the same. It might need a review cycle or two to get to the right level for the product, or they wouldn’t consider how it fits into the grand scheme of things, etc.

And let’s not talk about the time estimates I got. I’m willing to assume that my personal timing estimates are highly subjective and influenced by my deep familiarity with the codebase.

But still. Multiple days for something that felt like it should be a two-hour job was hard to sit with.

I carried around a background level of frustration for quite some time. It killed me that the pace of development wasn’t up to what I wanted it to be. “If I could just have the time to sit and write this”, I kept thinking, “we would be done by the end of the week.”

There was progress, to be clear, but nothing was moving fast enough. Everywhere I looked, we had stalled.

And then something happened. It didn’t happen all at once, but in the space of a month or two, features started to land. Each team had been heads-down on something for quite a while, and by some coincidence of timing, they all finished around the same time.

Suddenly, we moved from “we have nothing to ship” to “we can’t have so many new features all at once”. I realized that I would be able to ship things faster, for sure. I could do two new features, maybe even three, in that same time frame. That would require head-down coding for the entire duration, of course.

Reading that last paragraph again, I have to admit that I may be letting some hubris color my perception 🤷😏.

I wouldn’t be able to deliver the sheer quantity of features that the team was able to deliver.

What had felt like months of stagnation turned out to be parallelism in action.

Yes, some of the code wasn't the same code that I would write. And some of the architectural decisions weren't the ones I'd have made. That didn’t make them wrong, mind. And those developers were working on things I was not working on. And the sum total of what got built was something I could never have done solo.

Treating coding agents as junior developers?

I think about that experience constantly now, because I'm living a version of it again, except the new team member is Claude. Working with AI coding agents today feels remarkably like working with a junior developer who is also a savant.

They've read everything. They know an enormous amount. They can produce working code quickly and confidently across a staggering range of domains. And yet they're also genuinely ignorant in ways that will surprise you: missing context, misreading intent, optimizing for the wrong thing, occasionally producing something that is confidently and completely broken.

This is not a criticism. This is just what it's like. And I've dealt with this before. There are clear parallels between mentoring junior engineers and looking at the output from an AI agent.

There is an assumption that you need to get perfect output from a coding agent. But you are not likely to get perfect output from a human developer. Even experienced developers benefit greatly from reviews, guidance, etc. Junior developers need more of that, of course, but they can still bring value, even if their output goes through several iterations.

For coding agents to bring real value, you need to consider them in the same light.

The shift that happened with my developer team is the same shift that's happening now with AI agents.

Instead of writing every line yourself, you start spending time on the bigger picture: here's the overall direction, here's the architectural constraint, here's what done looks like. Then you review the outputs.

Talking to a coding agent is not that different from discussing a feature with a dev and reviewing their code days later, except that the agent delivers the output in the time it takes to get coffee.

The fact that this cycle is done in a short amount of time means that you still have all the knowledge in your head. You can catch drift before it becomes technical debt.

The cost of going in the wrong direction is greatly reduced, which means that you can be far more radical about how you approach these tasks.

Unnatural impulses as a developer

I wonder if a lot of developers are facing challenges in this area specifically because they don’t have the managerial experience needed for this new aspect of the work.

I have been writing code with Claude recently. And the short feedback cycle means that I’m loving it. I'm not abdicating the technical judgment, mind. I'm applying it differently.

I'm writing the high-level design, not the implementation. I'm doing the review, not the first draft. And I'm being honest with myself that the output, while it isn’t always what I would write, is covering ground I simply would not have covered otherwise.

I have been doing this for a long time and it feels quite natural. I also remember that this was a difficult transition for me at the time.

For those who want to better understand how they can get the most value from coding agents, you are probably better off looking into project management theory rather than optimizing your agents.md file.

time to read 4 min | 650 words

One of our team leads has been working on a major feature using Claude Code. He's been at it for a few days and is nearly done. To put that in context: this feature would normally represent about a month of a senior developer's time.

He did the backend work himself — working with Claude to build it out, applying his knowledge of how the system should behave, reviewing, adjusting, and iterating. When I asked him about the frontend, he said: "I'm going to let Matt’s Claude handle that."

Context: Matt is the frontend team lead.

Note the interesting phrasing. He didn't say "I'll do the UI later" or "Claude’ll handle the UI." He deferred to the frontend lead who has the domain expertise to drive that part.

That's not a throwaway comment. That's an important statement about how work should be divided in the age of AI agents.

Here's the thing: I've told Claude to build a UI for a feature, pointed it at the codebase, and it figured out how the frontend is structured, what patterns we use, and generated something I could work with. It wasn’t a sketch or a wireframe diagram, it was actually usable.

I got a functional UI from Claude in less time than it would take to write up the issue describing what I want.

That UI was enough for me to explore the feature, do a small demo, etc. I’m not a frontend guy, and I didn’t even look at the code, but I assume that the output probably matched the rest of our frontend code.

We won’t be using the UI Claude generated for me, though. The gap in polish between what I got and what a real frontend developer produces is enormous. I got something I could play with, but it was very evident that it wasn’t something that had received real attention.

For the time being, it was more than sufficient. The problem is that even leaning heavily on AI, the investment of time for me to do it right would be significant. I'd need to understand our frontend architecture, our conventions, our component library, how state flows, and what our designers expect. All of that would take real time, even with an AI doing most of the code generation.

That is leaving aside the things I don’t know about frontend that I wouldn’t even realize I need to handle. I wouldn’t even know what to ask the AI, even though it could do the right thing if I sent it the right prompt.

Contrast that with the frontend team. They know the architecture of the frontend, of course, and they know how things should slot together and what concerns they should address. They know when Claude's suggestion is on the right track and when it's going to create a mess three layers down. Effectively, they know the magic incantation that the agent needs in order to do the right thing.

What does this say about AI usage in general? Given two people with the same access to a smart coding agent like Claude or Codex, both performing the same task, their domain knowledge will lead to very different results. In other words, it means that Claude and its equivalents are tools. And the wielder of the tool has a huge impact on the end result.

The role of expertise hasn't diminished. It's shifted. The expert is no longer the person who can produce the artifact. They're the person who can direct the production of the artifact correctly and efficiently. That's a different skill profile, but it's no less valuable and the leverage is higher.

We're still figuring out what this means structurally. But the instinct to say "that's not my domain, let the person who knows it handle the AI that does it" is correct. Domain knowledge determines the quality of the output, even when the AI is doing all the typing.

time to read 9 min | 1674 words

I am working a bit with sparse files, and I need to output the list of holes in my file.

To my great surprise, I found that my file had more holes than I put into it. This probably deserves a bit of explanation.

If you know what sparse files are, feel free to skip this explanation:

A sparse file reduces disk space usage by storing only the non-zero data blocks. Zero-filled regions (“holes”) are recorded as file system metadata only.

The file still has the same “size”, but we don’t need to dedicate actual disk space for ranges that are filled with zeros; we can just remember that there are zeros there. This is a natural consequence of the fact that files aren’t actually composed of linear space on disk.

Filesystems grow files using extents (contiguous disk chunks). A file initially gets a single extent (e.g., 1MB). Fast I/O is maintained as sequential data fills this contiguous block. Once the extent is full, the filesystem allocates a new, separate extent (which will most likely not reside next to the previous one). The file's logical size grows continuously, but physical allocation occurs in discrete bursts as new extents are dynamically added.

If you are old enough to remember running defrag, that was essentially what it did: it ensured that the whole file was a single contiguous allocation on disk. Because of this structure, it is very simple for a file system to just record holes, and the only file system you’ll find in common use today that doesn’t support them is FAT.

At any rate, I had a problem. My file has more holes than expected, and that is not a good thing. This is the sort of thing that calls for a “Stop, investigate, blog” reaction. Hence, this post.

Let’s see a small example that demonstrates this:


#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    const off_t file_size = 1024LL * 1024 * 1024;
    int fd = open("test-sparse-file.dat", O_CREAT | O_RDWR | O_TRUNC, 0644);

    // Reserve 1GB of disk space for the file, without writing a single byte.
    fallocate(fd, 0, 0, file_size);

    // Walk the file, printing every hole that lseek() reports.
    off_t offset = 0;
    while (offset < file_size) {
        off_t hole_start = lseek(fd, offset, SEEK_HOLE);
        if (hole_start >= file_size) break;

        off_t hole_end = lseek(fd, hole_start, SEEK_DATA);
        if (hole_end < 0) hole_end = file_size; // no data after this hole

        printf("Start: %.2f MB, End: %.2f MB\n",
               hole_start / (1024.0 * 1024.0),
               hole_end / (1024.0 * 1024.0));

        offset = hole_end;
    }

    close(fd);
    return 0;
}

If you run this code, you’ll see this surprising result:


Start: 0.00 MB, End: 1024.00 MB

In other words, even though we just used fallocate() to ensure that the disk space was reserved, as far as lseek() is concerned, the file is just one big hole. What is going on here?

Let’s dig a little deeper, using filefrag:


$ filefrag -b1048576 -v test-sparse-file.dat 
Filesystem type is: ef53
File size of test-sparse-file.dat is 1073741824 (1024 blocks of 1048576 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..      23:     165608..    165631:     24:             unwritten
   1:       24..     151:     165376..    165503:    128:     165632: unwritten
   2:      152..     279:     165248..    165375:    128:     165504: unwritten
   3:      280..     407:     165120..    165247:    128:     165376: unwritten
   4:      408..     535:     164992..    165119:    128:     165248: unwritten
   5:      536..     663:     164864..    164991:    128:     165120: unwritten
   6:      664..     791:     164736..    164863:    128:     164992: unwritten
   7:      792..     919:     164608..    164735:    128:     164864: unwritten
   8:      920..    1023:     164480..    164583:    104:     164736: last,unwritten,eof
test-sparse-file.dat: 9 extents found

You can see that the file is made of 9 separate extents. The first one is 24MB in size, then 7 extents that are 128MB each, and the final one is 104MB.

Amusingly enough, the physical layout of the file is in reverse order to the logical layout of the file. That is just the allocation pattern of the file system, since there is no relation between the two.

Now, let’s try to figure out what is going on here. Do you see the flags on those extents? It says unwritten. That means this is physical space that was allocated to the file, but the file system is aware that it never wrote to that space. Therefore, that space must be zero.

In other words, conceptually, this unwritten space is no different from a sparse region in the file. In both cases, the file system can just hand me a block of zeros when I try to access it.

The question is, why is the file system behaving in this manner? And the answer is that this is an optimization. Instead of reading the data (which we know to be zeros) from the disk, we can just hand it over to the application directly. That saves on I/O, which is quite nice.

Consider the typical scenario of allocating a file and then writing to it. Without this optimization, we would literally double the amount of I/O we have to do.

It turns out that this optimization also applies to Windows and Mac, but the reason I ran into it on Linux is that I used lseek(SEEK_HOLE), which considers the unwritten portions to be sparse holes as well. This makes sense: if I want to copy data and I am aware of sparse regions, I should treat the unwritten portions as holes too.

You can use ioctl(FS_IOC_FIEMAP) to inspect the actual file extents (this is what filefrag does) if you actually care about the difference.
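
For completeness, here is a minimal sketch of doing that by hand on Linux (most error handling trimmed, and assuming the file has at most 32 extents, which matches what we saw above):

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main()
{
    int fd = open("test-sparse-file.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    // Room for up to 32 extents, plenty for our 1GB test file.
    int count = 32;
    struct fiemap *fm = calloc(1, sizeof(*fm) + count * sizeof(struct fiemap_extent));
    fm->fm_start = 0;
    fm->fm_length = FIEMAP_MAX_OFFSET; // map the whole file
    fm->fm_extent_count = count;

    if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) { perror("FS_IOC_FIEMAP"); return 1; }

    // Print each extent, flagging the allocated-but-never-written ones.
    for (unsigned i = 0; i < fm->fm_mapped_extents; i++) {
        struct fiemap_extent *e = &fm->fm_extents[i];
        printf("logical: %4llu MB, physical: %6llu MB, length: %4llu MB%s\n",
               (unsigned long long)(e->fe_logical >> 20),
               (unsigned long long)(e->fe_physical >> 20),
               (unsigned long long)(e->fe_length >> 20),
               (e->fe_flags & FIEMAP_EXTENT_UNWRITTEN) ? " [unwritten]" : "");
    }

    free(fm);
    close(fd);
    return 0;
}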

time to read 1 min | 172 words

I needed to export all the messages from one of our Slack channels. Slack has a way of exporting everything, but nothing that could easily just give me all the messages in a single channel.

There are tools like slackdump or Slack apps that I could use, and I tried, but I got lost trying to make it work. In frustration, I opened VS Code and wrote:

I want a simple node.js that accepts a channel name from Slack and export all the messages in the channel to a CSV file

The output was a single script and instructions on how I should register to get the right token. It literally took me less time to ask for the script than to try to figure out how to use the “proper” tools for this.

The ability to do these sorts of one-off things is exhilarating.

Keep in mind: this isn’t generally applicable if you need something that would actually work over time. See my other post for details on that.

time to read 4 min | 710 words

I was reviewing some code, and I ran into the following snippet. Take a look at it:


public void AddAttachment(string fileName, Stream stream)
{
    ValidationMethods.AssertNotNullOrEmpty(fileName, nameof(fileName));
    if (stream == null)
        throw new ArgumentNullException(nameof(stream));

    string type = GetContentType(fileName);

    _attachments.Add(new PutAttachmentCommandData("__this__", fileName, stream, type, changeVector: string.Empty));
}

private static string GetContentType(string fileName)
{
    var extension = Path.GetExtension(fileName);
    if (string.IsNullOrEmpty(extension))
        return "image/jpeg"; // Default fallback

    return extension.ToLowerInvariant() switch
    {
        ".jpg" or ".jpeg" => "image/jpeg",
        ".png" => "image/png",
        ".webp" => "image/webp",
        ".gif" => "image/gif",
        ".pdf" => "application/pdf",
        ".txt" => "text/plain",
        _ => "application/octet-stream"
    };
}

I don’t like this code because the API is trying to guess the intent of the caller. We are making some reasonable inferences here, for sure, but we are also ensuring that any future progress will require us to change our code, instead of letting the caller do that.

In fact, the caller probably knows a lot more than we do about what is going on. They know if they are uploading an image, and probably in what format too. They know that they just uploaded a CSV file (and that we need to classify it as plain text, etc.).

This is one of those cases where the best option is not to try to be smart. I recommended that we rewrite the function so that the caller provides the content type directly.
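
The shape I had in mind was roughly this (a sketch of the direction, not the final API):

public void AddAttachment(string fileName, Stream stream, string contentType)
{
    ValidationMethods.AssertNotNullOrEmpty(fileName, nameof(fileName));
    ValidationMethods.AssertNotNullOrEmpty(contentType, nameof(contentType));
    if (stream == null)
        throw new ArgumentNullException(nameof(stream));

    // The caller knows what they uploaded; we just pass the content type along.
    _attachments.Add(new PutAttachmentCommandData("__this__", fileName, stream, contentType, changeVector: string.Empty));
}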

It is important to note that this is meant to be a public API in a library that is shipped to external customers, so changing something in the library is not easy (change, release, deploy, update - that can take a while). We need to make sure that we aren’t blocking the caller from doing things they may want to.

This is a case of trying to help the user, but instead ending up crippling what they can do with the API.

time to read 4 min | 716 words

You may have heard about a recent security vulnerability in MongoDB (MongoBleed). The gist is that you can (as an unauthenticated user) remotely read the contents of MongoDB’s memory (including things like secrets, document data, and PII). You can read the details about the actual technical issue in the link above.

The root cause of the problem is that the authentication process for MongoDB uses MongoDB’s own code. That sounds like a very strange statement, no? Consider the layer at which authentication happens. MongoDB handles authentication at the application level.

Let me skip ahead a bit to talk about how RavenDB handles the problem of authentication. We thought long and hard about that problem when we redesigned RavenDB for the 4.0 release. One of the key design decisions we made was to not handle authentication ourselves.

Authentication in RavenDB is based on X.509 certificates. That is usually the highest level of security you’re asked for by enterprises anyway, so RavenDB’s minimum security level is already at the high end. That decision, however, had a lot of other implications.

RavenDB doesn’t have any code to actually authenticate a user. Instead, authentication happens at the infrastructure layer, before any application-level code runs. That means that at a very fundamental level, we don’t deal with unauthenticated input. That is rejected very early in the process.
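
To give a sense of what pushing authentication down the stack looks like in .NET, here is a minimal sketch (this is not RavenDB’s actual code; it assumes an ASP.NET Core minimal API project with Kestrel, a server.pfx certificate file, and a hard-coded list of known client certificate thumbprints):

using System.Security.Cryptography.X509Certificates;
using Microsoft.AspNetCore.Server.Kestrel.Https;

// Hypothetical list of thumbprints for the client certificates we registered.
var registered = new HashSet<string> { "AB12CD34..." };

var builder = WebApplication.CreateBuilder(args);
builder.WebHost.ConfigureKestrel(kestrel =>
{
    kestrel.ConfigureHttpsDefaults(https =>
    {
        // The TLS stack (OpenSSL on Linux, SChannel on Windows) performs the
        // handshake and demands a client certificate. Unauthenticated
        // connections are rejected before application code sees any input.
        https.ServerCertificate = new X509Certificate2("server.pfx", "pfx-password");
        https.ClientCertificateMode = ClientCertificateMode.RequireCertificate;

        // The application's only job is to decide whether this (already
        // TLS-validated) certificate belongs to a known client.
        https.ClientCertificateValidation = (cert, chain, errors) =>
            registered.Contains(cert.Thumbprint);
    });
});

var app = builder.Build();
app.MapGet("/", () => "hello, authenticated client");
app.Run();

The point isn’t the specific API; it is that by the time any of your own code runs, the caller has already proven who they are.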

It isn’t a theoretical issue, by the way. A recent CVE was released for .NET-based applications (of which RavenDB is one) that could lead to exactly this issue, an authentication bypass problem. RavenDB is not vulnerable to it because the authentication mechanism it relies on is much lower in the stack.

By the same token, the code that actually performs the authentication for RavenDB is the same code that validates that your connection to your bank is secure from hackers. On Linux - OpenSSL, on Windows - SChannel. These are already very carefully scrutinized and security-critical infrastructure for pretty much everyone.

This design decision also leads to an interesting division inside RavenDB. There is a very strict separation between authentication-related code (provided by the platform) and RavenDB’s own code.

The problem for MongoDB is that they reused the same code for reading BSON documents from the network as part of their authentication mechanism.

That means that any aspect of BSON in MongoDB needs to be analyzed with an eye toward unauthenticated user input, as this CVE shows.

An attempt to add compression support to reduce network traffic resulted in size confusion, which then led to this problem. To be clear, that is a very reasonable sequence of steps to have happened. For RavenDB, something similar is plausible, but not from unauthenticated users.

What about Heartbleed?

The name MongoBleed is an intentional reference to a very similar bug in OpenSSL from over a decade ago, with similarly disastrous consequences. Wouldn’t RavenDB then be vulnerable in the same manner as MongoDB?

That is where the choice to use the platform infrastructure comes to our aid. Yes, in such a scenario, RavenDB would be vulnerable. But so would pretty much everything else. For example, MongoDB itself, even though it isn’t using OpenSSL for authentication, would also be vulnerable to such a bug in OpenSSL.

The good thing about OpenSSL’s Heartbleed bug is that it shined a huge spotlight on such bugs, and it means that a lot of time, money, and effort has been dedicated to rooting out similar issues, to the point where trust in OpenSSL has been restored.

Summary

One of the key decisions that we made when we built RavenDB was to look at how we could use the underlying (battle-tested) infrastructure to do things for us.

For security purposes, that means we have reduced the risk of vulnerabilities. A bug in RavenDB’s own code isn’t an authentication vulnerability; you have to target the (much more closely scrutinized) infrastructure to actually get to a vulnerable state. That is part of our Zero Trust policy.

RavenDB has a far simpler security footprint: we use enterprise-level TLS & X.509 for authentication instead of implementing six different protocols (and carrying the liability of each). This both simplifies the process of setting up RavenDB securely and reduces the effort required to achieve proper security compliance.

You cannot overstate the power of checking the “X.509 client authentication” box and dropping whole sections of the security audit when deploying a new system.

time to read 13 min | 2490 words

In the previous post, I talked about the PropertySphere Telegram bot (you can also watch the full video here). In this post, I want to show how we can make it even smarter. Take a look at the following chat screenshot:

What is actually going on here? This small interaction showcases a number of RavenDB features, all at once. Let’s first focus on how Telegram hands us images. This is done using Photo or Document messages (depending on exactly how you send the message to Telegram).

The following code shows how we receive and store a photo from Telegram:


// Download the largest version of the photo from Telegram:
var ms = new MemoryStream();
var fileId = message.Photo.MaxBy(ps => ps.FileSize).FileId;
var file = await botClient.GetInfoAndDownloadFile(fileId, ms, cancellationToken);

// Create a Photo document to store metadata:
var photo = new Photo
{
    ConversationId = GetConversationId(chatId),
    Id = "photos/" + Guid.NewGuid().ToString("N"),
    RenterId = renter.Id,
    Caption = message.Caption ?? message.Text
};

// Store the image as an attachment on the document:
await session.StoreAsync(photo, cancellationToken);
ms.Position = 0;
session.Advanced.Attachments.Store(photo, "image.jpg", ms);
await session.SaveChangesAsync(cancellationToken);

// Notify the user that we're processing the image:
await botClient.SendMessage(
    chatId,
    "Looking at the photo you sent..., may take me a moment...",
    cancellationToken
);

A Photo message in Telegram may contain multiple versions of the image in various resolutions. Here I’m simply selecting the best one by file size, downloading the image from Telegram’s servers to a memory stream, then creating a Photo document and adding the image stream to it as an attachment.

We also tell the user to wait while we process the image, but note that there is no further code here that does anything with the photo.

Gen AI & Attachment processing

We use a Gen AI task to actually process the image, handling it in the background since it may take a while and we want to keep the chat with the user open. That said, if you look at the actual screenshots, the entire conversation took under a minute.

Here is the actual Gen AI task definition for processing these photos:


var genAiTask = new GenAiConfiguration
{
    Name = "Image Description Generator",
    Identifier = TaskIdentifier,
    Collection = "Photos",
    Prompt = """
        You are an AI Assistant looking at photos from renters in 
        rental property management, usually about some issue they have. 
        Your task is to generate a concise and accurate description of what 
        is depicted in the photo provided, so maintenance can help them.
        """,

    // Expected structure of the model's response:
    SampleObject = """
        {
            "Description": "Description of the image"
        }
        """,

    // Apply the generated description to the document:
    UpdateScript = "this.Description = $output.Description;",

    // Pass the caption and image to the model for processing:
    GenAiTransformation = new GenAiTransformation
    {
        Script = """
            ai.genContext({
                Caption: this.Caption
            }).withJpeg(loadAttachment("image.jpg"));
            """
    },
    ConnectionStringName = "Property Management AI Model"
};

What we are doing here is asking RavenDB to send the caption and image contents from each document in the Photos collection to the AI model, along with the given prompt. Then we ask it to explain in detail what is in the picture.

Here is an example of the results of this task after it completed. For reference, here is the full description of the image from the model:

A leaking metal pipe under a sink is dripping water into a bucket. There is water and stains on the wooden surface beneath the pipe, indicating ongoing leakage and potential water damage.

What model is required for this?

I’m using the gpt-4.1-mini model here; there is no need for anything beyond that. It is a multimodal model capable of handling both text and images, so it works great for our needs.

You can read more about processing attachments with RavenDB’s Gen AI here.

We still need to close the loop, of course. The Gen AI task that processes the images is actually running in the background. How do we get the output of that from the database and into the chat?

To process that, we create a RavenDB Subscription to the Photos collection, which looks like this:


store.Subscriptions.Create(new SubscriptionCreationOptions
{
    Name = SubscriptionName,
    Query = """
        from "Photos" 
        where Description != null
        """
});

This subscription is triggered by RavenDB whenever a document in the Photos collection is created or updated and its Description has a value. In other words, it fires when the Gen AI task updates the photo after it runs.

The actual handling of the subscription is done using the following code:


_documentStore.Subscriptions.GetSubscriptionWorker<Photo>("After Photos Analysis")
    .Run(async batch =>
    {
        using var session = batch.OpenAsyncSession();
        foreach (var item in batch.Items)
        {
            var renter = await session.LoadAsync<Renter>(item.Result.RenterId!);
            await ProcessMessageAsync(_botClient, renter.TelegramChatId!,
                $"Uploaded an image with caption: {item.Result.Caption}\r\n" +
                $"Image description: {item.Result.Description}.",
                cancellationToken);
        }
    });

In other words, we run over the items in the subscription batch, and for each one, we emit a “fake” message as if it were sent by the user to the Telegram bot. Note that we aren’t invoking the RavenDB conversation directly, but instead reusing the Telegram message handling logic. This way, the reply from the model goes directly back into the user’s chat.

You can see how that works in the screenshot above. It looks like the model looked at the image, and then it acted. In this case, it acted by creating a service request. We previously looked at charging a credit card; now let’s see how we handle the model creating a service request.

The AI Agent is defined with a CreateServiceRequest action, which looks like this:


Actions = [
    new AiAgentToolAction
    {
        Name = "CreateServiceRequest",
        Description = "Create a new service request for the renter's unit",
        ParametersSampleObject = JsonConvert.SerializeObject(
            new CreateServiceRequestArgs
            {
                Type = """
                    Maintenance | Repair | Plumbing | Electrical |
                    HVAC | Appliance | Community | Neighbors | Other
                    """,
                Description = """
                    Detailed description of the issue with all
                    relevant context
                    """
            })
    },
]

As a reminder, this is the description of the action that the model can invoke. Its actual handling is done when we create the conversation, like so:


conversation.Handle<PropertyAgent.CreateServiceRequestArgs>(
    "CreateServiceRequest",
    async args =>
    {
        using var session = _documentStore.OpenAsyncSession();
        var unitId = renterUnits.FirstOrDefault();
        var propertyId = unitId?.Substring(0, unitId.LastIndexOf('/'));

        var serviceRequest = new ServiceRequest
        {
            RenterId = renter.Id!,
            UnitId = unitId,
            Type = args.Type,
            Description = args.Description,
            Status = "Open",
            OpenedAt = DateTime.UtcNow,
            PropertyId = propertyId
        };

        await session.StoreAsync(serviceRequest);
        await session.SaveChangesAsync();

        return $"Service request created ID `{serviceRequest.Id}` for your unit.";
    });

In this case, there isn’t really much to do here, but hopefully this conveys the kind of code this allows you to write.

Summary

The PropertySphere sample application and its Telegram bot are interesting, mostly because of everything that isn’t here. We have a bot that has a pretty complex set of behaviors, but there isn’t a lot of complexity for us to deal with.

This behavior is emergent from the kind of capabilities we entrust to the model. At the same time, I’m not blindly trusting the model; I verify that what it does is always within the scope of the user’s capabilities.

Extending what we have here to allow additional capabilities is easy. Consider adding the ability to get invoices directly from the Telegram interface, a great exercise in extending what you can do with the sample app.

There is also the full video where I walk you through all aspects of the sample application, and as always, we’d love to talk to you on Discord or in our GitHub discussions.
