We Are Developers 2024 in Berlin

After a long break due to the pandemic, I finally got the chance to attend a conference again! This time I went to Berlin for a legendary event that attracts developers from all corners of the world. Thanks to DeepL (the best translation company ever) and my individual learning budget, I was able to be part of this exciting experience.

💡
If you want to skip the non-conference part, click here

Tuesday — Travel and arrival

My workday went mobile! I started remote at home, continued at the train station, and finally on the ICE to Berlin. To my surprise, the train was not just on time, but even 1 minute early: the first time an ICE I took was on time. Quite impressive!

Definitely a marathon, not a sprint, to get there. Since it had been years since my last visit to Berlin, I obviously had to revisit a few tourist attractions and take a little tour around the city. As an evening is clearly not enough for sightseeing, I had to prioritize and ended up with two classics that I want to highlight and write about here.

Brandenburg Gate

A symbol of both division and unity, Berlin's Brandenburg Gate is a must-see landmark steeped in history. This iconic 18th-century monument offers a glimpse into Prussia's past and serves as a powerful reminder of Germany's reunification.

East Side Gallery

The iconic East Side Gallery isn't your typical museum. It's a good place to get your dose of history and your art fix all at once, with powerful murals splashed across the longest remaining stretch of the Berlin Wall. I was hyped to take some pictures, ponder the past, and immerse myself in the vibrant street art scene.

Wednesday — Event with Docker

After a few hours of working from the hotel in the morning, it was time for the first part of my conference experience: WORKSHOPS, NETWORKING AND DRINKS by Docker at the City Cube at Messe Berlin. This also gave me a great opportunity to skip the line on Thursday and start the first day of the conference relaxed.

The workshop part was a bit confusing for me, and for many others I met. I assumed the workshops were hosted by Docker; instead, this referred to the workshops offered by the conference, which were fully booked rapidly after registration opened. Unfortunately, neither I nor any of the folks I talked to could register for one. Maybe next time.

Networking and my impressions

I was a bit disappointed by this experience at first, but fortunately had a lot of time to talk to a ton of people about tech, what they are working on, and what their tech stacks are. Luckily for me, there was a bit of everything! From one-man armies to medium-sized corporations and hidden champions.

To my surprise, there were plenty of people working on huge PHP codebases, with versions 7 and 8 in the lead and only rarely 5. For everyone not aware: these are the most modern (and also most performant) versions of PHP. When it comes to deployment, a lot has (unsurprisingly) changed since my time in the field: more Kubernetes and AWS ECS, fewer classical VMs, and almost no external web hosting providers.

When it comes to monitoring and tracing, most of the folks I talked to have only the absolute minimum of monitoring in place: barely enough, and sometimes mostly manual. And that's before we even talk about tracing, which, from what I heard, was the exception.

Static code analysis also does not seem to be a high priority, with “just” linters for style and formatting in place in most cases. This appears to be sufficient for most developers, who instead focus more on catching issues in code reviews, something I find rather time-consuming and error-prone to do manually.

Drag Karaoke to finish the day

After tons of networking, technical exchange and a quick lunch break, it was already 18:00h, which meant it was time for Drag Karaoke with Gieza Poke. A really lovely experience! Two hours went by like nothing.

Thursday — Conference Part I

The first day of the conference was quite chaotic and overwhelming for me at first. After getting used to the floor plan, it got significantly easier over time. Sadly, there were only about 10 minutes between talks, and I “stage-hopped” quite a lot, as the topics I was interested in were spread across the venue. That meant a lot of struggling to make it through the masses to the next stage in time.

You’re a great coder? That alone won’t get you far: The soft skills secret in IT success.

Pierluigi Meloni started with some numbers based on an excerpt of hiring statistics, pointing out that coding is too common a skill to set you apart when applying for a job or going for a promotion. At the same time, you won't get far without it, with most recruiters canceling basically on the first call when the technical base is missing.

Instead, it is soft skills that make the difference: having the business in mind when coding and making decisions. According to studies, we spend 53% of our time as developers in meetings, which, I think, depending on the week, can be even more.

Another shocking discovery was that 70% of IT projects fail, where “fail” means over time, over budget, or both. This indicates something horrible going on in our industry. The most common causes mentioned for these failures seem to be process issues or miscommunication, both implying that soft skills were the real issue.

What helps to gain more visibility is becoming the go-to person for a given technology or area of expertise. It is not about being the best, but about being the one people think of when they work on it and look for advice.

In a great analogy, he compared the job of us IT folks to that of doctors or mechanics. You usually don't tell them how to do their job, because they are the experts. In IT, by contrast, we often let customers or management tell us what to do or ask us to “skip tests to save time”. The message was clear: we need to be more protective of our expertise and fight for what's right. At the end of the day, we get paid to be the experts who get it done.

Keeping in mind that we spend most of our time in meetings, talking to stakeholders or even other developers, communication is key. We need to learn to become better communicators, giving people hooks in conversations to hop onto an explanation, and tailoring the content to the audience. When talking to other audiences, rephrase units like code, components, and modules into their domain: for marketing, that's benefits, competitor analysis, and conversion. Management, meanwhile, usually decides based on whatever is given to them, so instead of trying to explain the tech, give them scenarios with the advantages and disadvantages the technical decision would result in.

Architecture Antipattern

This talk was quite disappointing and boiled down to https://architecture-antipatterns.tech/.

For me, the only interesting part was the “emotional attachment” antipattern, where people are basically preoccupied and determined to apply their architecture, no matter the facts. Something I saw fairly regularly in professional settings without knowing what it was called.

💡
I wrote a blog post about a topic similar to emotional attachment a while ago, titled That's the way we've always done it, or how shit sticks for ages.

Modulith Instead of Monolith — Pragmatically Towards Microservices

This talk by Zeiss focused on a business application used by different product lines. It started off as a monolithic desktop application with no clear structures and boundaries left after decades of continuous development.

What came to mind when I first saw their visualization of the dependencies is how valuable that step is: make the problem visible first, so you can track progress and see at a glance what the biggest issues really are.

In general, the idea was pretty simple, yet powerful: split the monolith into clearly separated and isolated modules. These modules are self-contained, with business logic, UI, and a well-defined interface exposing functionality to other modules.

To put it all together, they introduced another module, the kernel, which basically binds and merges the entry points of the modules into a usable application.
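
To make that structure concrete, here is a minimal sketch in Python (my own illustration with hypothetical module names, not Zeiss' actual code) of self-contained modules behind small interfaces, with a kernel binding their entry points together:

```python
from typing import Protocol

class Module(Protocol):
    """The well-defined interface every module exposes to the outside."""
    def entry_point(self) -> str: ...

class BillingModule:
    # Self-contained: owns its business logic and UI internally.
    def entry_point(self) -> str:
        return "billing UI"

class InventoryModule:
    def entry_point(self) -> str:
        return "inventory UI"

class Kernel:
    """Binds and merges the modules' entry points into a usable application."""
    def __init__(self, modules: list[Module]):
        self.modules = modules

    def run(self) -> None:
        for module in self.modules:
            print(f"mounting {module.entry_point()}")

Kernel([BillingModule(), InventoryModule()]).run()
```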

The refactoring time was estimated at five years; quite an impressive amount. To avoid being stuck in maintenance mode for years, they came up with a clever idea: putting the monolith “in a box”. This basically meant stuffing the monolith into one giant module and handling it like the newer, cleaner modules. The monolith can use new functionality, which also makes refactoring a lot easier and more incremental. Obviously, using legacy functionality in new modules is not allowed. While this pattern was not entirely new to me, the integration and smoothness were quite astounding.

While in a microservice architecture one needs to settle at some point on synchronous or asynchronous communication via queues, a message bus, etc., they implemented an “in-process message bus”. Since all the functionality is in-process, the overhead is really low and there is no unreliability due to the network.
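
As a rough sketch of the idea (assuming a simple publish/subscribe design; the talk did not show implementation details), an in-process message bus keeps the familiar messaging pattern but dispatches via plain function calls:

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable

class InProcessBus:
    def __init__(self) -> None:
        self._handlers: dict[type, list[Callable]] = defaultdict(list)

    def subscribe(self, event_type: type, handler: Callable) -> None:
        self._handlers[event_type].append(handler)

    def publish(self, event: object) -> None:
        # Synchronous, in-process dispatch: no network, no serialization,
        # no delivery failures to handle.
        for handler in self._handlers[type(event)]:
            handler(event)

@dataclass
class OrderPlaced:
    order_id: str

bus = InProcessBus()
bus.subscribe(OrderPlaced, lambda e: print(f"billing invoices {e.order_id}"))
bus.subscribe(OrderPlaced, lambda e: print(f"shipping packs {e.order_id}"))
bus.publish(OrderPlaced("A-42"))
```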

A good mixture: a modular code base combined with the benefit of a single deployable unit for customers. Easy to test and deliver, while providing common standards and essentially locally shared libraries that avoid version conflicts in the first place.

As a bonus, Hendrik Lösch also mentioned the architecture they use for the machines they deliver to customers, which evolve and are similar to one another but also quite different in certain aspects. That's why they utilize Product Line Engineering, which I had never heard of before. Quite an intriguing concept that I will take a closer look at.

Kubernetes Maestro: Dive Deep into Custom Resources to Unleash Next-Level Orchestration Power!

Custom Resources are quite a mighty concept, used by a lot of the k8s-native tooling that I use every day. I was not aware of how comparatively easy it is to write new functionality to extend k8s.

The amount of code that needs to be written to make it work is relatively small, as most of it can be autogenerated by the Kubernetes code-generator.

It boils down to four relatively simple steps (a sketch of the handler follows after the list):

  1. Design what the specification should look like from a user's perspective
  2. Define the schema using OpenAPI, with a built-in k8s resource
  3. Generate the boilerplate code
  4. Implement the handler covering CRUD operations for the resource, syncing it with the current state of the infrastructure
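
The talk centered on the Go-based code-generator workflow, but to illustrate what step 4 can look like, here is a rough sketch using the Python operator framework kopf instead (my substitution; the Backup resource and its fields are hypothetical):

```python
import kopf

# Handlers for a hypothetical "Backup" custom resource; group, version and
# plural must match the CRD schema registered in step 2.
@kopf.on.create('example.com', 'v1', 'backups')
def on_create(spec, name, namespace, logger, **kwargs):
    interval = spec.get('intervalMinutes', 60)
    logger.info(f"scheduling backups for {namespace}/{name} every {interval}m")
    # ...reconcile the real infrastructure here to match the desired state...
    return {'phase': 'scheduled'}  # kopf writes this into the resource's status

@kopf.on.delete('example.com', 'v1', 'backups')
def on_delete(name, namespace, logger, **kwargs):
    logger.info(f"tearing down backup schedule for {namespace}/{name}")
```

During development, this can even run outside the cluster with kopf run handlers.py before being packaged into an image.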

While Ume Habiba excluded the deployment part, I checked the k8s docs later on. It is as simple as deploying a regular service to the cluster: basically building a container image with the built operator.

I'll definitely give it a shot and already have some neat use cases for it in mind.

Everything as code

This talk was very generic; it just listed everything that could be written as code. While I see the appeal of it and rely on as-code approaches wherever possible, it did not bring any value for me.

Break the Chain: Decentralized solutions for today’s Web2.0 privacy problems

GDPR fines for companies violating data protection law can reach a quite impressive 4% of global turnover, while 60% of smaller companies even have to close their business within only 6 months of a data breach.

After some very overwhelming numbers on how much data breaches can put on the line, the idea was simple: what if we move the data back to the user? Nothing (user-related) to store, nothing to steal. Sounds simple enough.

Adam Larter, the CTO of Affinidi, went briefly over the existing standards from W3C in this space, such as Verifiable Credentials.

In combination with OIDC for Verifiable Presentations, these enable the user to take control over their data again.

How the chain of trust works

The very neat thing about this is that it works like a passport. In real life, the government (issuer) gives you a passport (verifiable credential), which, for example, a store (verifier) checks when you buy alcohol.

In the digital world scenario, an online shop could request only specific information, like the date of birth and maybe the address, directly from the user. This can be used to verify that the shipping address matches and that the customer is allowed to buy alcohol according to the law in their country, all without having to store this information in their system, thanks to asymmetric signing.
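
The cryptographic core of this is ordinary asymmetric signing. A minimal sketch (my own, heavily simplified illustration using Python's cryptography package, not Affinidi's stack or the full W3C data model):

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer (e.g. a government authority) signs a credential with its private key.
issuer_key = Ed25519PrivateKey.generate()
credential = json.dumps({"date_of_birth": "1990-05-01", "country": "DE"}).encode()
signature = issuer_key.sign(credential)

# The user presents credential + signature to the shop (verifier), which checks
# them against the issuer's published public key. Nothing needs to be stored on
# the shop's side, and any tampering breaks the signature.
issuer_key.public_key().verify(signature, credential)  # raises InvalidSignature if tampered
print("credential verified")
```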

I really like this idea! Affinidi's business centers around making this usable today, moving the trust either to local devices or to their cloud. Quite an intriguing way to earn money and come closer to the goal of getting this into the hands of non-tech people.

Do you know how fast you were developing?

Goodhart's Law

What really caught my attention early on was that he included Goodhart's Law (“when a measure becomes a target, it ceases to be a good measure”), which I had never heard of before.

Unsurprisingly, Markus Walker started off with DORA metrics, which I already knew of, though I expected a bit more hands-on or usable parts. But he did bring up another idea, SPACE, which I had never heard of before. It is a nice framework that fills the gaps DORA leaves, putting developers and their work more into focus.
A quite interesting idea from it is to measure focus and waiting times in particular. Something that sounds pretty helpful, but can be difficult to set up depending on the context.

Interestingly, he mentioned that Google's research strongly suggests that a roughly equal talking time for each team member indicates a well-working team. When the talking time is not distributed equally, members might not feel appreciated or heard, which can result in an unhappy and therefore less productive team.

How your .NET software supply chain is open to attack: and how to fix it

The talk mainly focused on NuGet, my “favorite” package manager.

Shockingly, thanks to Microsoft's implementation, NuGet can be effortlessly exploited out of the box.

Let's start off with typosquatting, where one speculates that a developer misspells a package name and downloads a hijacked version of an official library. To guard against this, one has to set signatureValidationMode=require in combination with restricting packages to a maintained list of trusted owners, to avoid downloading packages from untrusted authors. As NuGet packages are all signed by default, even malicious code comes signed, making it necessary to also maintain the list of owners manually in the NuGet config.

Another attack vector, known as dependency confusion, opens up thanks to the design of the package manager. Because requests to all package mirrors are made in parallel, only the fastest response is actually used. So one can simply create an unlisted NuGet package on the official index with the same name as an internal package; if the official mirror is faster, the exploited version will be downloaded. To avoid this, you need to set up package source mapping, forcing internal packages to be downloaded only from the internal index.

In addition, it is a good idea to claim your prefix on the official NuGet index, as documented by Microsoft. This prevents malicious actors from hijacking the company name and abusing it for seemingly trustworthy packages.

As a package can always define custom C# code, an untrusted package can basically execute any code it likes without the user noticing.

Friday — Conference Part II

The second day of the conference was quite disappointing for me. Half of the talks I put on my agenda were either terribly executed/presented or did not give me any value.

Delivering a Successful Tech Demo: The Steps to Follow

Boris Hristov held a pretty remarkable presentation about creating better tech demos. Even considering that he is a presentation trainer and founder of a company specializing in exactly this, I was very impressed.

There are a few key things I will keep in mind for future tech demos that I want to share:

  • Create a strong contrast (before vs. after; problem vs. solution); if code is involved, a side-by-side comparison is powerful
  • Remove anything that could be distracting from every UI visible in the screen share (browser, desktop)
  • Prepare for the worst in any scenario, including pre-recording the demo in case something goes horribly wrong
  • In case of a physical demo, bring a backup device if possible
  • Ensure there is a well-defined cleanup process so the environment is reproducibly clean
  • Make sure the font is large and the cursor is high-contrast, so everyone can see them: in real life on a projector or TV screen, and in remote calls with different resolutions and setups
  • Be prepared for the worst and train for these scenarios; when nothing can surprise you, there is nothing to fear

Test-reduction — Doing more with less

When I, and most of my fellow developer friends and colleagues, think about coverage, we think about code coverage. Apparently, for QA engineers and in general, there are actually two kinds of coverage that are becoming increasingly important:

  • Code coverage → what areas of code have and have not been executed
  • Test coverage → what risks have been examined, from the user's perspective

Apparently, most of us use error guessing when we write tests: we come up with the test cases and input data ourselves. This already covers a lot, but falls especially short on edge cases.

Property-based testing can help solve a lot of the downsides of this approach and should be used in addition to it. It works by trying plenty of input combinations, ideally with a library that also takes over generating exotic inputs. This way, one can cover more cases out of the box without having to worry too much about it.
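
As a small illustration (my example using the Python library Hypothesis, one of the usual picks for this; the talk did not prescribe a library), instead of hand-picking inputs you state a property that must hold for any input and let the library hunt for counterexamples:

```python
from hypothesis import given, strategies as st

def slugify(text: str) -> str:
    # Hypothetical function under test.
    return "-".join(text.lower().split())

@given(st.text())  # Hypothesis generates many inputs, including exotic unicode
def test_slugify_contains_no_spaces(s: str) -> None:
    assert " " not in slugify(s)
```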

Another interesting technique I learned about is mutation testing. Again, a library manipulates control flow and operators in the code under test. The idea is to make sure the tests fail once the implementation changes: if a test still passes although the mutated logic is faulty, it was only coincidentally arriving at the expected result instead of actually asserting the behavior.
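
A tiny made-up example of the idea (tools like mutmut or Stryker automate the mutating; the code below is mine):

```python
# Implementation under test:
def is_adult(age: int) -> bool:
    return age >= 18

# A mutation tool would generate "mutants" such as:
#   return age > 18    (boundary mutated)
#   return age <= 18   (operator inverted)
#
# This test "kills" the boundary mutant: the mutant returns False for
# exactly 18, so the test fails on it, proving the test actually asserts
# the boundary instead of passing by coincidence.
def test_is_adult_boundary() -> None:
    assert is_adult(18) is True
    assert is_adult(17) is False
```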

As we can't test all the edge cases and possible scenarios out there, there is an approach I am really intrigued by: risk-based testing. Basically, you build a matrix of the probability of an error for some input, the impact it would have, and how much it would bother a user (or caller). After categorizing each of the scenarios, one can write tests for the most impactful things first. This allows a focus on what really matters, backed by a proper scale rather than gut feeling.
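
A quick sketch of how such a matrix could be scored (my own hypothetical scenarios and 1-5 scale, not from the talk):

```python
# (scenario, probability, impact, user annoyance), each scored 1-5
scenarios = [
    ("payment fails silently",      2, 5, 5),
    ("typo in footer link",         4, 1, 1),
    ("import chokes on empty CSV",  3, 3, 4),
]

# Rank by overall risk and write tests for the top entries first.
for name, prob, impact, annoyance in sorted(
    scenarios, key=lambda s: s[1] * s[2] * s[3], reverse=True
):
    print(f"risk {prob * impact * annoyance:>3}: {name}")
```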

Into the hive of eBPF!

eBPF basically allows you to extend the Linux kernel of a machine without having to recompile or alter it in any way. Something I had heard of, but I was not aware that it is basically already part of k8s and many other cloud-native tools out there.

In a nutshell, it allows you to hook into events provided by the kernel and intercept many things like system calls, network logic, etc., which comes in pretty handy for low-level monitoring, load balancing, and modifying kernel behavior.
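
For a feel of how approachable these hooks are, here is a classic minimal example using the BCC Python bindings (assuming bcc is installed and you have root privileges; my illustration, not from the talk) that traces every execve system call on the machine:

```python
from bcc import BPF

# eBPF program (restricted C) that runs inside the kernel on each probe hit.
prog = r"""
int trace_exec(void *ctx) {
    bpf_trace_printk("execve intercepted\n");
    return 0;
}
"""

b = BPF(text=prog)
# Attach to the kernel's execve syscall entry point, no recompilation needed.
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")
print("tracing execve... Ctrl-C to stop")
b.trace_print()
```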

A very neat side effect of this is that one can basically mitigate CVEs for the Linux kernel (even older versions) before a proper patch is released. This can be done without a host restart or anything, making it quite a powerful use case.

Mohammed Aboullaite also mentioned at the end of the talk that it's definitely not necessary for everyone out there to be able to write eBPF code, but it surely helps to know that it exists and to understand, on a very high level, how it works.

Getting Quality Right

Ridiculously, my biggest learning of the day was that “bulletproof coffee”, where one puts butter in their coffee, is actually a thing. I will absolutely give it a shot, even though it sounds terrible.

The talk boiled down to the SEI Quality Model, which tries to be more explicit than other quality models and definitions for software out there.

Dr. Gernot Starke, one of the creators of the arc42 Quality Model, also pointed out an interesting concept: Quality-Driven Software Architecture. It means asking very specific questions, following the SEI model or similar, to ensure that the client's and the developers' understanding of quality software is the same.

A Journey from Internal Tools to Public SDK

The key message of this talk, for me, was a confirmation that dogfooding is the only way to ensure your product is really great, because nothing beats working with your own product, tools, and libraries to experience them yourself.

Recap

To wrap it up: I had a lot of fun at the conference. It was a very nice opportunity to network and exchange with fellow developers from all around the globe; to discuss ideas, rant about common issues, and talk about how things have changed over the years.

I truly enjoyed a lot of the conversations and talks, learned a great deal, and got a lot of input. While I would rather not drop names or assign blame here, some talks were really poorly put together or boring, even though the topics were exciting. Fortunately, these were exceptions, and I got a lot of valuable information and exchange out of the conference.

ℹ️
It was challenging to wrap up these almost three days, formulate everything, and condense it into this compact blog post, so please bear with me if I have assumed too much prior knowledge here and there.

If you found this write-up helpful and took something from it, I'm always happy if you drop me a line, either as a comment here, on LinkedIn, or via mail.

Special thanks