Black boxes

11/14/2025 · 5 min read

In the first session of the sixth edition of Instituto Tramontana's product management program, we tackled something that often gets overlooked: the nature of software as information. It's uncomfortable terrain. It forces you to move with uncertainty, to accept that there are always things we cannot see, and to develop ways of thinking that help us fail less.

Without a correct understanding of software's determining role, it's easy to fall into common mistakes that weigh on everything we do. I've written before about why software-based products are so difficult: information is not knowledge, there's always invisible information, information is not flat, and it's related to contexts that are constantly changing.

Participants taking notes during the session

What we cannot see

Software-based products have many similarities with what in other fields is known as "black boxes." In psychology, the black box is a metaphor to designate what lies between input and output, between stimulus and response. It's everything that is not observable, or at least not in a habitual or inexpensive way.

We can bring this idea to our terrain. We too build products that are, in a sense, sets of stimuli and responses. We could learn about them through the signals they emit. But much remains hidden.

Digital product thinking, because it operates on this terrain, must favor systemic connections. Both at the operational level—diagrams, flows, graphic representations—and at the expository level: arguments that attempt to relate the parts to explain the whole.

The emergence of LLMs and generative artificial intelligence only reinforces this perspective. We're facing the black box taken to the extreme: not even those who build these systems can precisely explain what happens between input and output. The model learns patterns that escape direct inspection. Opacity is no longer a practical limitation—we don't have time or resources to observe—but constitutive. There's no way to open that box.

This forces us to rethink many things. How we evaluate what works. How we establish trust. How we coexist with systems that produce useful results without being able to trace the path that generates them. The skills we develop to work with precarious information—representing problems, making assumptions explicit, resisting the temptation of premature certainties—become more necessary than ever.

Two schemas for thinking

There are two systemic schemas that dominate decisions about digital products.

On one hand, the input → output schema. It's worth internalizing so you can apply it to any situation in which you're making a decision. In software it's especially rich because the input is usually where your activity sits, while the output is a terrain that encompasses experience in its broadest sense. Thinking in terms of data types becomes a decisive factor.
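The schema can be made concrete. Here is a minimal Python sketch (all names and data are invented for illustration) that treats a product feature as a typed function from input to output, so the data types become visible in the signature itself:

```python
from dataclasses import dataclass

# Hypothetical types: the point is that input and output
# are explicit, inspectable pieces of data.
@dataclass
class SearchQuery:        # the input: where your activity sits
    text: str
    max_results: int

@dataclass
class SearchResult:       # the output: what shapes the experience
    titles: list[str]

def search(query: SearchQuery) -> SearchResult:
    # A stand-in body; the signature is the schema itself.
    catalog = ["black boxes", "systems thinking", "plain text"]
    hits = [t for t in catalog if query.text in t][: query.max_results]
    return SearchResult(titles=hits)

result = search(SearchQuery(text="box", max_results=5))
print(result.titles)  # → ['black boxes']
```

Reading the signature alone already tells you what crosses the boundary in each direction, which is where many product decisions actually live.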

It's highly recommended to turn to everything surrounding the Unix philosophy to understand the power of this schema. Do one thing and do it well. Compose small pieces. Prefer plain text.
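These principles translate beyond the shell. A sketch in Python (the log lines are invented) of small pieces, each doing one thing over plain text, composed the way a `grep | sort | uniq -c` pipeline would be:

```python
# Each function does one thing over plain text lines,
# mirroring the stages of a shell pipeline.
def grep(lines, term):
    """Keep only the lines containing term."""
    return (l for l in lines if term in l)

def sort_lines(lines):
    """Order lines so duplicates sit together."""
    return sorted(lines)

def uniq_count(lines):
    """Collapse duplicates into 'count line' entries."""
    counts = {}
    for l in lines:
        counts[l] = counts.get(l, 0) + 1
    return [f"{n} {l}" for l, n in counts.items()]

log = ["login ok", "login failed", "logout ok", "login failed"]
print(uniq_count(sort_lines(grep(log, "login"))))
# → ['2 login failed', '1 login ok']
```

Each piece is trivial on its own; the power is in the composition, and plain text is the interface that makes the composition cheap.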

On the other hand, the client ↔ server schema. Its implications propagate throughout the network, and the nature of the different pieces participating at each end determines many of our decisions. It's advisable to pay attention to the movements around APIs to understand the power of connections. Understanding key pieces, like the browser, becomes decisive.
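Stripped of the network, the schema is a request travelling to a handler and a response travelling back. A minimal Python sketch (the endpoint and payloads are invented) of that exchange, in the shape most HTTP APIs follow:

```python
# Server end: a route table mapping "METHOD /path" to handlers.
def get_status(params):
    return {"status": 200, "body": {"service": "up"}}

ROUTES = {"GET /status": get_status}

def server(request: dict) -> dict:
    """Dispatch a request dict to its handler, or answer 404."""
    key = f"{request['method']} {request['path']}"
    handler = ROUTES.get(key)
    if handler is None:
        return {"status": 404, "body": {"error": "not found"}}
    return handler(request.get("params", {}))

# Client end: build a request, send it, interpret the response.
response = server({"method": "GET", "path": "/status"})
print(response)  # → {'status': 200, 'body': {'service': 'up'}}
```

Everything the real network adds—latency, failure, caching, the browser in between—layers on top of this basic exchange, which is why the nature of each end matters so much.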

Things get interesting in digital when the ends of both schemas blur: as one thing connects to another, roles are exchanged.

Program session

Not fooling ourselves too much

With the prudence of never applying them blindly, some rules prove useful when facing information problems:

  • Optimization rule: first prototype, then polish; don't optimize anything before having it working.
  • Clarity rule: clarity is always preferable to its absence.
  • Simplicity rule: start with the simple, add complexity only when necessary.
  • Silence rule: if you have nothing to say, be quiet.

The first step to failing less is to look at problems as a set of possible alternatives, never as a set with a single element. Represent the problem space, make our assumptions explicit, resist the temptation to have the problem already solved before starting.

The limitations we face force us to always work with compromises and sacrifices. Accept them, list them, agree on them with the team. And form teams whose size corresponds to participation, with enough autonomy to move solutions forward, and with constant communication to absorb the deltas of changing information.

The habit is not to fool ourselves too much. Living with precarious information.


Session practice

Case: AWS and our knowledge of systems. We analyzed in pairs the DynamoDB incident in the US-EAST-1 region from October 2025. The exercise consisted of navigating sources (AWS communications, news, technical forums), identifying the layers of the problem, and writing a plain explanation of the failure along with a diagram of the circuit of pieces involved. An exercise in articulating opinions from precarious information.

In-session exercise: Signals from customer support. From real tickets about download problems, classify problem components in a matrix of opposites: visible vs. not visible, information vs. knowledge, dynamic vs. static, flat vs. multi-layered.

Homework: Two exercises. Anecdote hunting: capture everyday situations that illustrate the session topics. How does it work?: choose a piece of a digital product and document its internals—how you imagine it works, analyzed through the input → output and client ↔ server models.