The Tale of An Operationally Excellent Team

“Based on a True Story”

Join us for this short and boring novel of teamwork, excellence and success.

The Team

It all started on a Thursday morning. The team was happy: an essential refactoring had gone live the day before, and everything was running smoothly. As a refactoring should, it changed the internals of the system, with no external impact expected.

The two Backend Engineers who had recently joined the team, Alice and Bob, were quickly getting a good grip on the several services they were responsible for, together with the team’s most experienced engineer, Charlie, who was now out on vacation. But there was nothing to worry about. Life was good: development was simply continuing as planned.

First contact

A few hours later, one of the team’s stakeholders noticed something weird: there was more information to deal with in the system’s UI than usual. It was not critical though. The amount of information was not that high, just above normal, but it was definitely weird and unexpected. Alice and Bob investigated but couldn’t find anything suspicious, so they decided to wait and observe a bit more.

Meanwhile, the product manager, Dave, was also uneasy and was checking some numbers, trying to understand the situation further. More data was still necessary though, and for that, more time was needed.

Getting Serious

Unfortunately, by the end of the day it was clear that the situation was getting worse. Data kept accumulating, more and more. Something was definitely up, and it was more than just “weird”: it was serious and would cause problems if not fixed.

But as mentioned before, nothing suspicious had been found yet. At this point, with things getting serious, the team decided to stop everything and focus on the investigation. It was time to sort this out, and the team was determined to do so.

The Rollback

One suspicion that kept bothering everyone was the refactoring done the day before. It was too close to the incident to be a coincidence. At the same time, no direct correlation could be found in the data, nor in the timelines of the events. There was no proof that the refactoring caused any problems, but also no proof that it did not.

The investigation was moving a bit slowly though, so the team opted to roll back the previous day’s changes anyway and call it a day. The investigation would continue the next day, when everyone would be rested and ready for a fresh start, with clear minds.

Deep Diving

With data accumulated during the night after the rollback, the team now had at least one confirmation: the refactoring did not cause the problems. The excessive data was still accumulating, even though the old code was now running.

The fresh set of minds showed results though. Some traces were finally found in the logs, pointing to where in the chain of services the problem could be. A careful analysis, nicely spotted by Dave, exposed where, between two systems, the data seemed to be fanning out. With a possible culprit in mind, the team knew which service to dig into, even though it was not yet clear what they were looking for.

Looking for the truth

Alice and Bob promptly analysed both the source code and the logs of the service. Everything seemed fine in general, but the traces found earlier in the logs kept repeating, and started to make more sense. It seemed that, at a point where half of the incoming data should have been halted, everything was going through. This hypothesis matched the data collected by Dave. The team was getting closer, and Alice and Bob were quite relieved… and curious.

Now, what could be causing that behaviour? It was definitely not in the service itself, or at least no evidence of that could be found. Further investigation pointed to a different service that provided the information used to halt the data. And after more digging, finally, the answer: the downstream service was always replying with go-ahead signals.

Just kick it with your boots

At the end of the day, the team responsible for the downstream service mitigated the problem with a classic manoeuvre: restarting it. This didn’t stop our heroes, Alice and Bob, from digging further into the problem. They were really curious to figure out what was behind all the trouble they had just been through.

Alice and Bob went to the source code of the downstream system and checked it for problems. It took them some time, since they were not familiar with it, but they found the exact problematic line and communicated it to the downstream team.

No, not again please

Crisis averted, time for some due diligence. The first thing was to write a Post Mortem, to ensure everything around the problem was investigated and documented, and with that to keep it from happening again in the future. Alice and Bob took care of this quite fast: during the whole time the incident was ongoing, they were actually already documenting facts and a timeline of the events. From that to a full Post Mortem was quite an easy step.

In the end, the team had a good idea of what to do to avoid such cases in the future. Other than actually getting the root cause fixed, they also identified that notification of the problem was broken: instead of being made aware of the issue by stakeholders, they would feel much better if there were an alert firing. This would allow the team to act in a more proactive fashion.

Rollback the rollback

With everything under control again, and a plan to keep it like that, it was time to go back to business. There were a few things to be kept in mind. They would have to re-deploy the refactoring from the beginning of this tale. They would also have to work on action items from the Post Mortem. Finally, feature development should resume.

The whole team knew what to do. Together, they planned how to act on all of the things mentioned above and, in a few days, everything was back on track. The incident handling was a complete success, from beginning to end, and they were ready for the next one. They were just curious to see what Charlie would have to say about all of that when he got back from vacation.

Intuition failure

The story is not only composed of our heroes and people that got directly involved or affected. Like a true team of teams, everyone wanted to help, one way or another. Another one of the team’s stakeholders came up with an idea: how about having specific days where deployments would happen, like Mondays, so that the team would have more time to act on eventual problems?

While this kind of idea is quite common and pops up again and again in the industry, the team knew it was not the right thing to do. They politely thanked the stakeholder and rejected it. At the end of the day, they knew they would be ready if anything happened at any time. On top of that, this incident was not even caused by a deployment in the first place.

We are excellent, we move forward!

Another, perhaps counterintuitive, fact was that the team came out of this stronger than before. They were more confident they could handle problems even when Charlie was not around. They knew they could navigate through all the team’s services, even though they hadn’t written most of those systems. And they knew that, if necessary, they would receive the support needed to investigate issues and fix root causes, without pressure.

When something negative happens in our lives, we can feel a bit down. At work, this could be an incident that then impacts other people, maybe even causes monetary loss. But if we look at such events through positive lenses, there is always a bright side. They are always opportunities to learn, to improve, and to come out stronger.

I am myself quite curious to see what awaits our heroes in their journey in the future. What about you?


Doing my Bit Against CoronaVirus

We are all in a once-in-a-lifetime kind of situation, and unfortunately it is not a good one. Our lives are changing, our work is changing, and we can’t really know what comes next. When we are out of this crisis, the world will be different. Living in Germany right now, it is only fair that I quote Angela Merkel:

Since German reunification, no, since the Second World War, there has not been a challenge for our country in which action in a spirit of solidarity on our part was so important.

Original article.

Even though she is focusing on Germany here, the problem is obviously bigger. Being a Brazilian living in Germany, I also keep an eye on what is happening in Brazil. It is curious to see the pandemic at an earlier stage there, and I am hoping that the country deals well with it. And this is taking only two countries into consideration, the ones that are closely tied to me. The sad thing is, if you ask random people how things are going for them, you have a huge chance of getting the same kind of perspective, with two different countries: wherever they live, and wherever they are originally from. Or maybe wherever they have family and friends. This is global. This is for all of us.

Now, if you are anything like me, you don’t feel like sitting around and waiting for it all to end. You want to help, to do your bit. And so I am doing what I can, which is surely not enough, but definitely better than staying put. Let’s go through a few things I am doing, from the obvious to some more specific to me.

Home Office

I’ve been working from home for a few weeks now. I have to admit that I’m quite lucky here, since remote work is something that fits quite naturally into my job, plus I already worked from home for a couple of years in the past. And the teams I’m working with at the moment all reacted really well to this new reality: just about everyone is actually working remotely, and some feel even more productive than before.

Part of my home-office setup

This is important not only to preserve your own health, but also to avoid spreading contamination. Imagine you are one of the lucky people who get the virus and develop immunity to it really fast. Even then, you can transmit it before you even know you had it in the first place. And one of the people to whom you transmit the virus could end up being someone not as lucky as you. Better safe than sorry here.

Unfortunately there are lots of people whose jobs won’t allow for a remote setting, or wouldn’t make sense in one at all. To help increase the safety of those people, those of us who can should work from home.

Stay at Home!

This is a continuation of the last point, and is something that all of us can do, to some degree, no matter what. Even if you have to go out and work, the rest of the time you can stay back. So, basically, you do exactly that: You. Stay. At. Home. Some people still have to go out to work, and most of us have to at least go to the market from time to time. Other than that, though, you should be able to stay at home.

We are taking this very seriously in my family. It is not always easy, especially with small children at home. But it is the right thing to do. The reasons are basically the same as before: even if you are not in a risk group, you shouldn’t help spread the virus. It makes me really sad when I see pictures of people still going to parks, for example, in big groups. It also explains why Germany had to forbid it. And in Berlin, for example, the restrictions have been extended until at least April 19th.

Donate Computing Power

In the longer term, a cure and/or a vaccine is what we really need. There are lots of researchers working on this topic already, but it takes time. I’m not a doctor or a researcher and thus cannot help directly with this. But there is something I can do: donate my desktop computer’s computational power.

There are probably several different projects available for that. The one I use is called Folding @ Home (FAH). It allows researchers to define work units that are then distributed to, literally, thousands of people around the world. Normally, FAH supports research on cancer or Alzheimer’s, among other diseases. Covid-19 is now part of this list.

Supporting Covid-19 is new though, so even if you look at the official list of supported diseases and don’t find it there, work units for it are being distributed behind the scenes, as long as you keep the default configuration of supporting research for Any disease. This is a screenshot of what my PC was doing one of these days:

FAH Web Control page – http://localhost:7396/

Since I have a reasonable PC at home that I don’t use that often, it was an easy decision to contribute. And since I built this PC for gaming, it also has a graphics card that can be used for FAH’s purposes as well. Just a small caveat: the client is quite easy to configure, but in my case it didn’t detect my graphics card by default and I had to find some help in the forums. It was worth the extra trouble though, as GPUs seem to yield way more computational power than CPUs alone. At least if the points system is a valid way of understanding how much computation you are actually donating 😉

Finally, if you want more detailed information on how this whole thing works, this post from Hackaday is quite interesting and informative.

3D Printed Face Shields

Another way to do your bit is to help people that have to expose themselves, like doctors and nurses. In that direction, Prusa Research started an awesome initiative: to 3D print face shields. You can find more information here.

Since I have a 3D printer, I decided to collaborate as I can – i.e. by 3D printing such devices. Now, 3D printing is only part of the game: one must assemble the whole unit, figure out where exactly the shields are most needed, and distribute them. This would be too much work for a single person and, after some searching around, I found the Maker Vs Virus group. There, hundreds of people around Germany are gathering together, virtually of course, to connect all the dots – conversations are in German, though.

A batch of prints ready to be sent for assembly


We are all in this together. There are loads of ways to help, but those depend on your specific situation. Look around, do some “googling”, and you might find some inspiration that suits you. We will get through this way better if everyone does their bit. It doesn’t require too much from each one, but the results of the community effort can’t be anything but awesome.

Keep on fighting, stay at home and stay healthy.


Scala World 2017



So this year I managed to go to Scala World again and, like last year, it was awesome. Also like last year, the event featured some extra outdoor activities. And unlike last year, this time I participated in one of them: the Sunday Hike. It was a great opportunity to exercise a bit, walking over a mountain – and to do some networking. In the end I was really tired and my knees were hurting, but I also met nice people and had great conversations. Plus the views were just awesome.


The conference itself started on Monday, with a short introduction from Jon Pretty, followed by a keynote from Edward Kmett with nice content, delivered with Haskell examples. I must admit I got lost in the middle of the talk, so it is good to know that the recording should be available at some point 🙂


During the first day of the conference we had talks about lambdas, Matryoshka, functional composition and functional APIs, and also some more practical topics like the changes coming to the Scala Collections in Scala 2.13 and the current state of Dotty development. This day also presented an opportunity for me to get in contact with some terms that I was not fully aware of, like `anamorphism`, `catamorphism`, `Fix` and `hylomorphism`. Not that I fully understand all of them, but now at least I know that they are out there and where to look when I face them again in the future.

To close off the first day, we had a very nice dinner, with beer and an interesting “random talk” by Viktor Klang, with fun anecdotes from his 10 years of involvement in the Scala ecosystem. Or something like that 😉


Day 2 was a good continuation of the first day, with lots of good talks. Daniela Sfregola’s Category Theory introduction was especially interesting to me: it is nice to see introductory talks and workshops covering more advanced topics, and to see how many people attended them. It is also nice to see that some of those topics are not really that complex, they just have scary names.

One extra highlight for me was the fact that the conference this year featured quite a few female speakers. It is really nice to see diversity increasing, especially in advanced conferences like this one. Kudos to the organization, and to all the women that were there talking!

In summary, the event was a bit small, but awesome. It is just a pity that it will not happen again next year. Perhaps in 2019, but that remains to be seen. Fingers crossed!

You can see my pictures from the Sunday Hike and the event itself on my flickr album.

Edit: the videos from the conference can be found here.


A plea for small Pull Requests



Pull Requests (PRs) are the norm today when it comes to common software development practices in teams. It is the right way to submit code changes so that your peers can check them out, add in their thoughts and help you create the best code you can – i.e. PRs allow us to easily introduce code review to our development process and enable a great deal of teamwork, while also decreasing the number of bugs our software contains.

There are several aspects we can talk about when it comes to Pull Requests and code review. In this post, I’m specifically concerned with the size of PRs, although I’ll briefly touch on other points as well. Other dimensions you could think about include having a good description of what is being done and why, and being sure that the Pull Request changes one thing and one thing only, i.e. it is independent and self-contained.

On a personal note, I think Pull Requests nowadays are so important that I even use them on projects where I work alone, so that I can have automated checks applied before deciding to merge into master. It gives me that extra opportunity to catch errors before it is too late. On GitHub, for example, this generates a nice visual summary of the checks performed. And yes, you could do this straight on your branches, but using PRs is easier and more organized. You can, for example, easily decline and close a PR, and document why you did it. In this PR in one of my pet projects, for example, you can see Codacy, Travis CI and CodeCov checking my code before I merge it to master.

Having said the above, it is way too easy to get carried away when developing, and you may end up adding several small things at once – be it features, fixes, or simply some refactoring in the same PR – thus making it quite large and hard to read. And don’t get me wrong: crafting small, self-contained and useful Pull Requests is not easy! Good developers don’t create big PRs because they are lazy: sometimes it is hard to see the value in going the extra mile to break up what has already become way too big.

Another aspect to consider here is related to git commit good practices in general. Having small Pull Requests will also help to have small and focused individual commits, which is very valuable when maintaining code. Let’s illustrate this point with an example that happened recently to me.


Can I revert this?

I was investigating a bug: something that used to work well had simply stopped working out of the blue. After some time and investigation, I found that the relevant code had simply been removed, and that we hadn’t noticed it beforehand because of yet another bug. Obvious solution: go through the git history and just git revert the commit that deleted the code. Except that I couldn’t find any commit related to it.

After further investigation I finally found the commit that removed the files – but it was a commit that also did several other unrelated things. git revert was no longer an option, especially given the rest of the code that had changed since then, and I ended up having to manually add the files back myself. The total time spent on this became way more than it needed to be.

Why are big Pull Requests a problem?

The first and most important thing to note here is the human capacity to hold knowledge in one’s head. There is a limit to how much information you can keep and correlate at once, while at the same time weighing all its consequences for the rest of the system, or even for external / client systems. This limit will obviously differ from person to person, but it exists at some level. And when working in a team, you have to lower the bar, to make sure everyone can work at the same level.

When you are reviewing a Pull Request, you have to keep some things in mind, such as:

  • What are the new components being created?
  • How do they interact with existing components?
  • Is there code being deleted? If so, should it really be deleted?
  • Are the new components really necessary? Perhaps you already have something in the current code base that solves the problem? Or something that could be generalized and applied to both places?
  • Do you see new bugs being introduced?
  • Is the general design OK? Is it consistent with the rest of the project’s design?

There are quite a few things to check. How much of that can you keep in your mind while you are reviewing code? This gets harder the more lines of code there are to be checked.

So back to small PRs. While all of this has little to no impact on automated checks and builds, it can have a huge impact when it comes to code review. With that in mind, let’s go through at least a few ideas you can use to escape the type of situation where you don’t really feel like breaking your PR into smaller pieces – but should nonetheless. There is no black magic here, we will just use some nice git commands in a way that helps us achieve our goal. Perhaps you will even discover a few things you didn’t know before!


Sort your imports

I prefer sorting imports in alphabetical order, but the actual criterion doesn’t matter, as long as the whole team uses the same one. This practice can easily be automated and avoids generating a diff when two developers add the same import in different positions. It also completely eliminates duplicated imports generated by merges.

Sometimes this will also avoid conflicts where two developers remove or add unrelated imports in the same position and git doesn’t know what to do about them. Sorting imports makes them naturally mergeable.
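If your language keeps imports as plain `import` lines, even a one-liner in CI can verify the ordering. A sketch, assuming a Scala-like project (the file path is an example, not a convention):

```shell
# Exit non-zero if the import lines are not in sorted order.
# sort -c checks order without producing output on success.
grep '^import ' src/main/scala/Main.scala | LC_ALL=C sort -c
```

Dedicated tools like scalafmt do this better, but a check like this is a cheap stopgap until one is wired in.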


Avoid frequent formatting changes

This happens a lot, especially if you don’t use code formatter tools like scalafmt or scalariform (or whatever is available for your language of choice). Sometimes, you may see a blank line you don’t like. Or you don’t see a blank where you believe it should be. You simply go on and delete or add it. This means yet another line change that goes into your PR.

This is not only about PR size. Such a small change has a big chance of creating conflicts if you ever have to update your PR before merging. Another developer might legitimately change a certain piece of code, and you now have to check very carefully whether a change was only cosmetic and can thus be ignored, or whether there was something real to consider. More than once I’ve seen features simply vanish because of this kind of thing.

If you really want to make some formatting changes, do so, but send it as a separate PR that can be merged as soon as possible, and independent of any features. And consider automating this task as well.


Allow reviewers the time to review

This is a little meta, but important nonetheless: resist the urge to want your code merged right away. I suffer from this myself from time to time, especially when we have some very small PRs. Still, the reviewers should be allowed time to work. If you did a good job of making it small and self-contained, and added a good description to the PR body, you will likely get some speedy feedback.

To better explain this it is worth quoting a teammate, who once said:

Sometimes it feels like we are asking for thumbs, not for reviews.

If you sense something like this is happening, you should stop. You are probably rushing the review process, which will only result in some stress and badly reviewed code. My rule of thumb is to not ask for a thumbs up, quite literally. Every time I catch myself doing so I stop and rephrase, asking for a review instead.


Advanced and powerful: manipulating your sources with Git

Now for the more complex (and perhaps more interesting) practices. What follows requires at least an intermediate understanding of git, and a prerequisite of not being afraid of git rebase. I say this because most of us are afraid of git rebase when we first begin learning it. This is only temporary though, until you fully realize the power it gives you.

Let’s now think of the following scenario. You are working on a feature, and suddenly notice that some kind of side change is required: something not strictly related to the feature itself, but that would be of great help for your task. You might then get the urge to simply go on and do it, together with your current feature code.


Side changes with Git Stash

See the problem already? If you simply do it, the PR for your feature will get bigger. It will also now contain one (or more) extra concerns, meaning that the reviewers would have to verify this as well.

Instead, you should send this side change as a new PR. There are a few different ways to do this properly with git, but the easiest is to use git stash. What this does is hide your current changes and let you work with a clean workspace. You can then switch to a new branch, implement the side changes and submit the PR.
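Assuming your feature lives on a branch called `my-feature` and the team merges to `master` (both names are just placeholders), the whole dance could look like this:

```shell
git stash                           # park the in-progress feature work
git checkout -b side-change master  # clean branch for the side change
# ...implement the side change, commit it, push it, open the PR...
git checkout my-feature             # return to the feature
git stash pop                       # restore the parked work
```

Nothing here touches the feature work itself: it sits safely in the stash while the side change goes out on its own.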

With that, your teammates can start reviewing these changes immediately while you are still working on the feature itself. Moreover, they will also be able to leverage those changes in their own code – who said these changes would be useful only for you? And finally, it also gives your colleagues the opportunity to point out problems sooner rather than later. Perhaps something is incompatible with someone else’s work, or another developer had just started to make the same kind of changes and now doesn’t have to. You can work together to achieve an even better result. Not to mention that this should be a small PR, so quite easy to review.

After the PR is sent, you can go back to your feature branch and recover your work with git stash pop. Now there is yet another problem though: how do you deal with the fact that your side changes are probably not merged yet?

First, the problem in principle is not that big. The side changes are in their own commit, and thus your main changes are completely isolated. If at any time you get feedback and have to update the PR you just sent, you can always stash your current changes again. See the git stash documentation for more information on how this works.

Second, it might be that your PR with the side changes will simply be accepted as is and merged. In this case, it is quite easy to get your feature branch up-to-date. A git rebase master (or whatever branch your teams merge to) should do the trick. This is probably the easiest (and safest) variation of git rebase you can use. See the git rebase documentation here.
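Concretely, once the side-change PR is merged, the update is only a few commands (branch names are the same placeholders as before):

```shell
git checkout master
git pull            # bring in the merged side-change PR
git checkout my-feature
git rebase master   # replay the feature commits on top of it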

Finally, some pointers for the most complex case. You may find that you will have to fix many things on your side changes PR. Also, at this point you have already made a few commits towards the feature you are implementing. You can use your imagination here and a nifty combination of git features to solve your problem. For example, you could try the following steps:

  • Wait for the side changes PR to be merged to master
  • Update your master: git pull
  • Create a new branch, based on master: git checkout -b my-new-branch
  • Go to your feature branch and carefully use git log to find which commits you used for the feature
  • Go back to the new branch
  • Use git cherry-pick to move over the commits you found with git log

See the git cherry-pick documentation here. Notice that you can also cherry-pick a series of commits at once, instead of one by one, if you prefer. This also allows you to build directly on top of the commit you already sent as a new PR, perhaps in a temporary branch where you add your feature code on top of it.

As you can see, git is a very powerful tool and offers you many ways to solve your problems.


Splitting up code into multiple PRs

The next scenario is that moment when you’ve already gotten too excited with your code and couldn’t stop, and ended up with a huge pile of changes to throw at your peers’ heads. In this case, it can be very easy to simply go and say something like:

Sorry for the big PR. I could split it into smaller pieces but it would take too long.

Let’s go through some ideas to avoid this scenario by applying a little effort and splitting up your work.

First off, if you have well-crafted, individual commits, those can be turned easily into PRs with git cherry-pick. You can simply write down which commits you want to submit as new PRs, move to a new branch and bring those commits over with git cherry-pick. You can combine this with git stash to make it easier to deal with uncommitted code, like described above.

One small drawback is that sometimes your changes depend on each other, and you might have to wait for the first one to be merged before you can really send the second. On the other hand, if the first PR is small, chances are it will be approved quickly, as already mentioned.

The whole process might not be too pleasant for you at first, but will definitely help the rest of the team. A small tip that might sound obvious is to “pre-wire” your PRs: go to your peers and let them know that those PRs are coming and what they are about. This will help them review your code faster.


A note about failure

It might all be beautiful on paper, but in reality this is not always possible. Even if you follow the tips presented here, you may still end up with big PRs from time to time. The critical point is that, when this happens, it should:

  • Be a conscious decision, not an accident;
  • Be as small as possible, i.e., you applied at least some of the tips above;
  • Be an exception, not the rule.

Remember: this is all about teamwork. Some things might make you a little slower, especially until you get into the right frame of mind, but it will make the whole team faster in the long run, and will also increase the chances of bugs being caught during code review. A final plus is that knowledge-sharing will also be better, since there is less to learn on each PR, and team members can ask more questions without being afraid of turning the review process into an endless discussion.

If you have read everything up until this point, then perhaps you are interested in reading even more. Here are some further interesting references around the subject:

What do you think? Do you have other techniques that you think could help in creating small and effective PRs? Or do you disagree that this is necessary? Leave a comment with your feedback below!


Lots of Fun in Brazil


This year – 2017, if you’re reading this from somewhere in the future – between late April and early May, I visited Brazil. I obviously took the time to visit family and friends, but I want to talk about something else: I attended two conferences, and I actually gave talks at both of them. And despite that, both conferences were great!

The first one was QCon SP, which is quite a big conference and goes to several cities around the world; this was the São Paulo edition. The location of the conference itself was nice, although the neighborhood not so much – it is good that what matters is the time spent inside the pavilion :).


My talk there was about functional programming… in the Java track – which was quite challenging, but fun nonetheless. I also attended a few talks, which I’ll discuss a little bit next. Before that, it is worth mentioning that networking was excellent and this is one of the highlights of the conference. I met several old friends and got to know quite a few new faces.

The conference started with a keynote from Jim Webber. It was an interesting one, even though very political. He basically talked about the Panama Papers and how you could use Neo4j to analyze the huge amount of information that came with them. In summary, Neo4j is a graph database, and by structuring the information in such a fashion (a graph) you are able to see connections that would otherwise be really hard to find.

Next I saw a talk from Kirk Pepperdine about good code: “Better Code, Better Performance”. A bit philosophical in the beginning, but with some interesting points, especially regarding what good code is and its relation to performance. Bottom line: you should write readable and clean code, and leave optimizations to the Java Virtual Machine – it is very good at them.


And then it was time to see Michael Nascimento’s Java SE 9 for Architects talk. It was quite cool and informative. One highlight was that he actually mentioned some of the bad stuff coming to Java, specifically regarding Java SE 9’s modularization support, which seems to be full of issues. On a more positive note, one of the things coming that I really like is the “deprecated for removal” feature. This means that some old, deprecated and really bad code in the Java API can and will be marked for actual removal!

Another very interesting talk was, in a very loose translation, “Transaction Authorization in Nubank: consuming services from the 1980s with modern technologies”, from Lucas Cavalcanti and Luiz Hespanha, from Nubank. They conveyed very well the difficulties of implementing the integration with credit card companies, especially with Mastercard, which is the brand they work with. Especially interesting and surprising was the fact that Mastercard requires integrators to have an actual physical cable running from their offices to the integrating company’s office, plus the usage of a black-box server provided by them! Also, the communication with this server is done over sockets with a binary protocol following the ISO 8583 standard. Lots of new-old stuff to me.


This all was on the first day of the conference, which was when I also had my own talk that I already mentioned. It was really fun and you can see the video here.

The second day started with a keynote from a security expert at Slack: Security as Development, by Ryan Huber. It was a lightweight talk, where he told his story of getting hired and improving Slack’s security, plus some concepts like SecOps, and the story of a red team exercise that turned out to also involve a real attack. All in all, I have always liked Slack, and seeing this talk was reassurance that Slack is indeed cool and going in the right direction.

One of the most interesting talks for me on the second day was Philip Wadler’s Theorems for Free, Blame for All – it contained lots of math, some “crazy” symbols and theorems… and Haskell, of course. He also mentioned lambda calculus and said something simple but worth quoting: “Don’t be afraid of math” 😉 At the end, he did his signature move and turned into the “Lambda Man”. I knew about this but had never seen it live – it was actually quite surprising and weird… and fun!


I then saw a talk from the famous (at least in the Scala community) Bodil Stokke about a “Perfect Language”: The Realist’s Guide to Language Design. She went through several different languages and talked about their good and bad design decisions. Perhaps because she is well known in the community, I had quite big expectations for this talk. Perhaps also because of that, I was disappointed, as I was expecting more practical guidelines on language design. Anyway, you can see the talk when it is available and decide for yourself – this talk and Philip Wadler’s were both in English.

There were other talks on all kinds of topics, including cryptography and compilers, but I’ll stop here. You can see several of the talks from this conference on the InfoQ web site here – just bear in mind that the majority of them are in Portuguese. At this point, I was a bit tired already and basically used the rest of the conference for meeting people – the good old networking that I already mentioned earlier in this post.


TDC Floripa

Now let’s talk about The Developers Conference, specifically the edition I went to in May 2017: TDC Floripa. This conference goes to a few different cities in Brazil, and I used to go to most of them when I lived there. Since this is not possible anymore, I used the opportunity of already being in the country to attend the Florianópolis edition – which is the one I like the most anyway.

In this conference I also attended several talks but, unlike with QCon SP, I will not talk about them. I will rather concentrate on some activities I did there. The first one was coordinating the Functional Programming track. We had people talking about all sorts of languages and concepts, from Haskell to JavaScript. I also talked about Scala myself, specifically about dependency injection, where I mentioned Grafter, the framework my current team at Zalando uses. It was lots of work, but also lots of fun.

Switching topics a bit, the exhibition floor was quite interesting. Intel had a great booth with gaming-related things, including virtual reality glasses. It was the first time I played a game with such technology – 3D glasses and virtual reality – and it was certainly way beyond all my expectations: the immersion is really awesome! Great stuff! If such equipment were not so expensive, I would for sure buy myself one of those headsets. Right now though, an HTC Vive, which was the model I tested, sells for €899.


Another hot topic for me was the makers area, where some people were, among other things, displaying 3D printers. I had seen some of those before, but for some reason this time it got me really excited – so much so that I ended up buying one for myself when I got back home. Follow me on Twitter if you are curious to see what I’ll do with it – I’ll certainly share some things related to that. An assembly video will also come, but in Portuguese (sorry :P).


Finally, the next thing for me was a talk I gave about working abroad. It was almost a last-minute idea, and it went pretty well. I shared a little bit about my experiences of working outside of Brazil, and lots of people seemed really interested. For good or bad, the country is certainly going to lose lots of great developers in the near future.

In summary, it was great to visit Brazil, and it was especially good to be able to go to those conferences, to speak there, meet people and see interesting things happening. I hope I will be able to go back next year!

If you are curious to see some of that, I have uploaded my QCon SP pictures here and my TDC Floripa pictures here.

See you next time!


Scalar Conf 2017 – A quick visit to Warsaw


And so I got a last-minute opportunity to attend Scalar Conf 2017. It was the first time I attended this event, and also the first time I went to Warsaw – which is where it took place.

All in all, the event was good. I went with a few other Zalandos, and we had a booth there. One of the most interesting things overall was all the conversations I had with the people who visited us. Among the topics discussed were our Tech Radar (which we were displaying at our booth), curiosity about the Eff monad, and several questions about what it is like to work at Zalando.

I did not watch all the talks since, as hinted above, the discussions were the most interesting part of the conference, and I wanted to spend time at our booth. I did nonetheless watch a few of them, which I’ll discuss a bit below.

The first talk I watched was the first one of the conference: Dave Gurnell’s Adventures in Meta Programming. The topic is something he seems to have been diving deep into over the last few years – metaprogramming in general – as can be noted from other conferences where he talked about Shapeless, for example. This time the focus was not only Shapeless, but also macros, and when to use one instead of the other. I especially liked the terms “typey stuff” and “syntaxy stuff” that he coined to point out which is better for which kind of scenario. In summary: use macros for “syntaxy stuff”, and Shapeless for “typey stuff”.


Next there was a talk about type classes, from Andrea Lattuada: Typeclasses, a Typesystem Construct. I watched just part of it, since it was targeted at beginners. If you are curious, the examples were moving back and forth between Scala and Haskell – and if this topic is new to you, you should probably watch the video.

Another interesting talk I watched was from John de Goes: Quark: A Purely-Functional Scala DSL for Data Processing & Analytics. The main message was that functional programming is a better way to deal with data analytics in general, in contrast with the way Apache Spark does things, which causes lots of problems – even though it is productive to start with. Instead of having computational lambdas, we should decouple the description of a computation from actually executing it. A nice quote from him that goes in this direction is: “We are lazy functional programmers”. If you are curious about what he is doing there, other than checking the video of his talk you can also check the project’s GitHub page.
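The core idea – separating the description of a computation from its execution – can be sketched in a few lines of plain Scala. This is just an illustrative toy interpreter, not Quark’s actual API:

```scala
// A computation is described as plain data...
sealed trait Calc
case class Lit(n: Int) extends Calc
case class Add(a: Calc, b: Calc) extends Calc
case class Mul(a: Calc, b: Calc) extends Calc

object Interpreter {
  // Building the description runs nothing yet:
  val program: Calc = Mul(Add(Lit(1), Lit(2)), Lit(10))

  // ...execution only happens when an interpreter walks the tree.
  // A different interpreter could instead optimize, pretty-print or
  // distribute the very same description.
  def run(c: Calc): Int = c match {
    case Lit(n)    => n
    case Add(a, b) => run(a) + run(b)
    case Mul(a, b) => run(a) * run(b)
  }

  def main(args: Array[String]): Unit =
    println(run(program)) // prints 30
}
```

The “laziness” in the quote is exactly this: nothing happens while you build `program`, which gives the interpreter full freedom in how to execute it later.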


The conference also featured an interesting talk about monad transformers from Gabriele Petronella: Practical Monad Transformers. It is very interesting for anyone who wants to understand what such tools are for and what problems come with using them. A highlight came towards the end, when he talked about alternative tools and mentioned the Eff monad – a framework that is quite new, but which we are already using in my team at Zalando. Perhaps this also explains why so many people came to the Zalando booth curious to ask about Eff.


There were other interesting talks, but the last one I want to mention is Gatling Distilled, from Andrzej Ludwikowski. I had wanted to have a look at Gatling for a while, so this talk came in handy. He introduced what Gatling is and gave a few tips. A few takeaways for me:

  • Gatling can also be used for integration tests;
  • it has a nice DSL;
  • you shouldn’t use the recorder;
  • assertions can include response time limits;
  • several different data sources can be used;
  • remember to turn on logging, to understand what is going on.


During the whole conference, the organizers had a couple of flip-charts up with questions like which frameworks do people use for persistence, among other things. If you are curious, there is a post on the conference blog about this here where they go through all the questions asked and the results.


In summary, the conference was interesting and it was certainly worth the time going. The downside for me was that there were too many Akka related talks, and as you can see from my selection in this blog post, I’m not exactly interested in seeing too many things in that direction. I understand that Akka is important and cannot be left aside, but having five talks about that, plus a couple of others that were also indirectly related was a bit too much.

That being said, I hope I’m able to attend Scalar Conf again next year. If you want to see a bit of how it was, I uploaded my pictures to Flickr, and you can check all the conference talks in this YouTube playlist.


Six things I learned at Scala eXchange 2016


In December 2016 I attended Scala eXchange 2016 – a traditional and quite interesting Scala conference that occurs every year in London, UK. I had the opportunity to attend it for the 4th time, and like every previous one, it was well worth the effort. In this post, we will take a pragmatic look at six things I learned during the conference. We won’t talk about the beer though – you should go there next time if you want in on this 😉

By the way, all talks are available on the conference web site linked above, so be sure to check it out. After you are done with this post, of course!

Number 1: Compilation time is still an issue, but it is being tackled

Compilation time was the subject of at least three talks during the event. The first of these was the keynote by Adriaan Moors, team lead of the compiler team at Lightbend, where he made it clear that they plan to spend half of their team efforts during 2017 improving the situation.

Next up was the controversial “Compilation time: a bigger hammer”, from Iulian Dragos and Mirko Dotta, of the recently created company Triplequote. They presented the tool they are creating to speed up compilation times, based on parallelizing compilation units. The whole topic was a bit controversial because the tool is a commercial effort. On one hand, the compiler should be faster by itself, without the support of external tools. On the other hand, the Triplequote developers are investing their own resources, so it is only fair that they get something out of the effort.

Finally, there was the talk “Can scalac be 10x faster?”, from Rory Graves. This one was very interesting and full of tips you can apply right now to improve your projects’ compilation times. We actually tried a couple of them in our project and managed to get some improvements. We specifically changed two things:

  • replaced several wildcard imports (._) with specific imports;
  • split traits and classes that were sharing the same .scala source file into separate files.

The first change helped a lot with the total compilation time (after a clean). This is most likely because, with more specific imports, there are fewer places for the compiler to look for implicit conversions – and we use implicit conversions quite a lot.
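To illustrate the first change, here is a hypothetical before/after. It uses scala.concurrent.duration only as a stand-in – in our codebase the imports in question were our own implicit-heavy packages:

```scala
// Before – every member of the package is a candidate during implicit search:
//   import scala.concurrent.duration._

// After – only what we actually use:
import scala.concurrent.duration.{Duration, DurationInt}

object ImportTips {
  // DurationInt is the implicit class that enables the .seconds syntax
  val timeout: Duration = 5.seconds

  def main(args: Array[String]): Unit =
    println(timeout)
}
```

The behavior is identical; the difference is only in how much the compiler has to search through when resolving implicits.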

The second point was more useful in reducing incremental compilation time – i.e., the chances of a given file being invalidated and having to be recompiled became smaller.

In terms of compilation time, there was a third point we looked at briefly: macros. In this case, we wanted to start using a certain macro to reduce code repetition, but gave up because it was increasing compilation times quite a lot. For this kind of thing, the best tip we can give is to always pay attention to your build times when you decide to try new features. In our case, it seems the extra compilation time was coming from a combination of macros and the usage of Shapeless in the code generated by the macro. Unfortunately, we never had time to properly isolate and fix the problem.

In all cases, the plan was to share some numbers but, as mentioned above, we didn’t have a properly isolated test scenario to share, so the numbers were a bit biased. I’ll just leave the message: do try those tips yourself and see if they change something in your own projects.

Number 2: Scala compiler fork: a positive thing to have!

You probably already know that there is a group called Typelevel, and that they forked the Scala compiler. When this happened, there was some commotion in the community. During Scala eXchange 2016 we had an interesting talk from Miles Sabin about all of this, and about what Typelevel had been up to, especially in 2016.

The biggest takeaway here is that the Typelevel Scala compiler fork is a very positive thing and has had a positive impact on the Scala community. For example, there are a couple of bug fixes for the Scala compiler that are available in the Typelevel version. Moreover, you can use those fixes with the standard Scala compiler via a compiler plugin also made available by the team. Finally, such fixes are also being sent back to the Lightbend Scala compiler as pull requests, and some have been accepted!

One such case is the fix for Scala issue 2712 (SI-2712). It was fixed by the Typelevel folks and sent as a pull request to the standard compiler.

The fact that the above is possible confirms what Miles said during the presentation – Typelevel has an interesting rule: whatever improvements someone makes to the Typelevel Scala compiler must also be submitted as a PR to the standard compiler.

Finally, if you want to use the Typelevel Scala compiler in your projects, it is as simple as adding an sbt plugin to your build, as can be seen here.
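If I recall correctly, the switch boils down to pointing sbt at the Typelevel organization – something along these lines (a sketch only; check the linked instructions for exact versions and sbt requirements):

```scala
// build.sbt – assuming an sbt version that supports scalaOrganization
scalaOrganization := "org.typelevel"
scalaVersion := "2.12.1" // or the matching Typelevel release for your project
```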

Number 3: Shapeless is awesome and doesn’t have to be scary!

I’ll not dive too deep into this topic, but if you have ever heard of Shapeless and felt it is too complex, you have to watch Dave’s talk. It was a very gentle introduction to an awesome framework that lets you do some great generic programming.

We use Shapeless in one of our libraries, Grafter, which offers a nice, generic, “new old” way of dealing with constructor-based dependency injection, so this talk was already useful in practice.

One note before closing this topic: Shapeless is really useful, but it is intended more as support for library authors than for application developers. So don’t watch the talk and start using it everywhere! That being said, knowing how it works will help you deal with such libraries, which include the above-mentioned Grafter and the now famous Circe JSON framework, among others.

Number 4: The Future of Scala: it is moving forward and will keep doing so

Martin Odersky’s keynote talk, “From DOT to Dotty” was an interesting one for anybody that still has doubts about the future of the Scala language. Is it going to keep evolving?

Dotty is the future of Scala. The interesting thing here is that this doesn’t mean it will replace Scala, at least not in the short or even medium term. Instead, it is a place where innovations can be tried out and really happen without too many restrictions, before they are implemented in the Scala language itself, where the stakes are way higher. Quoting directly from Martin’s keynote: “The plan is that Dotty should support future iterations of the Scala programming language”.

Dotty might one day become Scala 3, but this is not yet something that is written in stone – right now, some features that work well in Dotty are actually being backported to scalac – confirming Martin’s words again, when he said that Dotty is there to “support the next iteration of Scala”.

The diagram below, extracted directly from Martin’s talk, shows the currently planned roadmap – obviously highly subject to change:


A nice example of a feature still to come in the (hopefully) near future is typesafe equality. It is being implemented in Dotty; right now, you have to use a library like cats or scalaz to get similar functionality in Scala. In the future, this will most likely be part of the standard language.
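To get a feeling for what typesafe equality looks like, here is a minimal typeclass-based sketch in plain Scala – similar in spirit to what cats and scalaz provide, but not their actual API:

```scala
// A typeclass describing equality for a type A
trait Eq[A] {
  def eqv(a: A, b: A): Boolean
}

object Eq {
  implicit val intEq: Eq[Int] = (a, b) => a == b
  implicit val stringEq: Eq[String] = (a, b) => a == b

  // Syntax: a === b only compiles when both sides have the same type A
  // and an Eq[A] instance exists
  implicit class EqOps[A](val a: A) {
    def ===(b: A)(implicit eq: Eq[A]): Boolean = eq.eqv(a, b)
  }
}

object TypesafeEquality {
  import Eq._

  def main(args: Array[String]): Unit = {
    println(1 === 1)      // true
    println("a" === "b")  // false
    // println(1 === "1") // does not compile: no shared type with an Eq instance
  }
}
```

The key difference from `==` is the commented-out last line: comparing values of unrelated types is rejected at compile time instead of silently returning false.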

Number 5: Zalando growing the Scala community

This is a bit self-serving, but it is nice to see Zalando as a company growing in the Scala community. At Scala eXchange, for example, we were present with six developers, of whom two gave talks at the conference: the first was Joachim, who spoke about Akka Streams, and the second was Eric, with a talk about Practical Eff. Our participants also represented two different locations: four came from the office in Berlin, and two from the Dublin office.

You can find Joachim’s talk here and Eric’s here. And we also had a booth. And lots of fun!

Number 6: Scala is OpenSource. And needs us all

Let us close this post with a call to action, mirroring the message from Heather Miller’s keynote. This call to action could be summarized in a single sentence, quoted directly from Heather’s talk: “we don’t want your money, we want your PRs”.

Scala is open source and lives, grows and evolves with help from the community. Its future is not tied to Lightbend’s hands, and to that end they have even created the Scala Center, an independent, not-for-profit organization focused on the future of Scala, community participation and the like.

In the context of community initiatives, a lot was said about the SIP process and its improvements. This is another point where Scala seems promising for the future. Other than that, they are also trying to move community discussions to a friendlier platform than mailing lists, in the form of a Discourse forum, which can be seen here – modern and pretty 🙂

In the end, none of these initiatives appeared out of thin air. One source of information she referenced was the book “Social Architecture” – already added to my to-read list.

So, in summary: Scala is still hot, and will be hot for a long time to come! And when this time passes, there will come Dotty!

Edit: I just uploaded some pictures here, if you are interested 😉


A Scala Book… in Portuguese

It took me a while. A very long while, actually, but it is finally out: I have written a Scala book! The only gotcha here is that it is in Portuguese. And as far as I know (I might be wrong, though) it is the first Scala book in this language. You can find it for sale here in the Casa do Código page.

The book was written with beginners in mind, i.e. people that never wrote any Scala code before, and one of the main reasons I decided it was worth writing in Portuguese was the lack of learning resources for Portuguese speakers to start learning the Scala language – also, there are plenty of resources covering those topics in English already.

Writing a book is a huge task. When I started, I never imagined how consuming it could be. But it ended up also being quite fun and full of learning opportunities, so well worth the effort. Now it is time to breathe and perhaps consider another book… but not in the short term 😉


Implicit conversions in Specs2 gone mad


In this resurrection post I want to talk a little bit about a problem we faced recently at CarJump (how I ended up there is a story for another post) with specs2 and Mockito. The issue I want to address is subtle and appeared in a very specific scenario. Let’s start by describing that scenario.

Disclaimer: the scenario below is valid for specs2 version 3.7.x – with specs2 2.x everything was fine. The implicit conversions defined by specs2 changed quite a bit between those two versions.

First, we have a specs2 test specification. Something as simple as the following:

class ImplicitsSpec extends Specification {
  "my spec" should {
    "do something" in {
      // the actual test code goes here
      ok
    }
  }
}
This works just fine; it is just a simple specs2 Specification. Next comes adding Mockito. In specs2, this is nothing more than adding the Mockito trait to the test suite – or, and this is where it gets you, to wherever you define your mocks. This is the pattern we recently started to use:

object ImplicitsSpec extends Mockito {
  // my common test vals here
}

Putting that code into words: we create a companion object that holds all the common vals used in the tests. We are actually starting to do this kind of thing in lots of places, and this is the first time it was a problem.

So, how are we to use those common values? Just import the companion object members. Applying this strategy to the spec presented earlier, the result would be something like:

class ImplicitsSpec extends Specification {
  import ImplicitsSpec._

  "my spec" should {
    "do something" in {
      // test code using the imported common vals
      ok
    }
  }
}

Pretty simple and nothing can go wrong there, right? Well… wrong. If you try to compile the code above, you will get an error like the following:

/src/test/scala/com/jcranky/specs2/implicits/ImplicitsSpec.scala:11: type mismatch;
[error]  found   : org.specs2.specification.core.Fragment
[error]  required: org.specs2.matcher.Matcher[String]
[error]     "do something" in {

Wait, what?

What happens is that the specs2 Mockito trait brings several other implicit conversions into scope, not only mock-related stuff. What’s more, imported conversions take precedence over conversions obtained from class definitions. In our case, it means that whatever comes from

import ImplicitsSpec._

comes before what we get from extending Specification. In practice, we are losing a conversion from Fragment to Matcher. I couldn’t find exactly where this conversion is defined, but there are a few workarounds. The first one is to change the companion object declaration to:

object ImplicitsSpec extends Specification with Mockito

This will bring all relevant implicit conversions into the same scope. Another common solution would be to declare the test specification like below, and remove Mockito from the companion object:

class ImplicitsSpec extends Specification with Mockito

The problem with this solution is that you can then no longer create common mock objects in the companion object. There are obviously other solutions, but they usually get more complicated. Still, if you know the root cause of the problem, please leave a comment!


What is the Best forum software out there?

And I’m back! It has been a long while since I last blogged, so let’s get going right away!

What is the best forum software available out there?

Not the best question to start with, so the best answer is not optimal either: it depends. It depends on what you want, on what you are looking for. What I want is something that:

  • is simple and easy to manage – I won’t have much time to manage stuff;
  • is free or at least with a good entry level pricing – I want to create a community, but it has no direct commercial goals;
  • is modern looking and easy to use.

That is perhaps too much to ask, but let’s find out.

The options

Below is the forum board software I found; some I already knew about, and others were recommended by friends. The grouping is not random: we will quickly analyse similar systems together.

These are the simpler ones. phpBB is probably among the best known of all the software I looked at, and both jforum and Simple Machines seem to be highly inspired by it. Both phpBB and jforum are open source, but I couldn’t really find that out about Simple Machines.

Those three got ruled out because of the third of my requirements: they have a very dated, old-looking UI. They could still be a good option if you are looking for a well-known formula.


Now things get visually much better. All three options above are modern-looking and seem quite interesting. NodeBB and Discourse also bring something new to our comparison table: they both offer commercial/hosted solutions, which is great when you don’t have the expertise or time to set up your own installation.

I discarded Mamute because it is not much more than a Stack Overflow clone, and I wanted something more community-focused and less Q&A-focused. Discussion should be fine and encouraged for what I’m looking for.

NodeBB and Discourse were a different matter. I was really close to choosing one of them when I found out about the winner – more on that later. They both have a small problem for me, though: they are based on technologies I’m not totally familiar with, which means I would have to spend some time learning. That would not be a big deal, but the winner is really a killer. Finally, their hosted versions seemed a bit expensive to me.


A fully commercial option. Good-looking and feature-rich, but it lost me on the price point. They base their price on the number of online users – but how am I to know that, considering I’m just starting?

The winner.

Another commercial option. Two features really got me: first, it is embeddable – it can be part of your site, instead of a different, separate thing altogether. Second, it has a great starting price: zero. You can use the free version for as long as you want, and if you decide to pay, the service levels are not based on the number of users you have, which makes everything that much simpler.

Also, the way Muut organizes information is exactly what I wanted: community-focused. And to make things even better, the setup is ridiculously easy: just drop an HTML snippet into your site and you are done. They also have an API and other really cool features!

There are only two features I miss on Muut: sticky posts and locked posts. If the community is well behaved, this shouldn’t be a problem, but we will find out.


What is it that I am doing, you might wonder… Well, I’ve been working on EasyForger for a while now. It is starting to get some users, and I would like a way to connect them together. You can see our forums – only in Portuguese for now, though =].

Posted in misc, web development | Tagged , , , , , , , , , , , , , , , , , , | Leave a comment