Twice a year, machine learning enthusiasts meet in a German city to attend the ML Conference. In addition to plenty of networking, the event offers content in several formats: full-day workshops, short 20-minute sessions, and longer one-hour sessions that cover their subjects in more depth. Speakers are expected to present the problems they faced during project development and how they overcame them.

Given the tremendous personal and professional development opportunities that the ML Conference offers its attendees, JArchitects could not let this event pass without participating, just as we did in the previous edition (Machine Learning Conference 2018). This time, however, our ML experts had a new task. In addition to exchanging knowledge with other participants during coffee breaks and networking sessions, Jean Metz and Thiago Alves presented the session Honey Bee Conservation using Deep Learning. Since our talk was on the last day, we will first show you how the opening days of the event went.

Workshops and Speakers dinner

On the first day of the conference, we had two workshops to choose from: An Introduction to Deep Learning with Keras – train your own Image Classifier with Deep Neural Networks by Frederic Stallaert and Xander Steenbrugge, and Machine Learning 101 ++ using Python by Dr. Pieter Buteneers. Both workshops had a hands-on format and ran in parallel throughout the day.

At the end of this full day, we were invited to the speakers' dinner, where we had the opportunity to talk with other speakers, sponsors, and the event organisers. Good Bavarian food completed the experience.

After enjoying the local food and exchanging stories with the other participants at the table, we walked back to the hotel. Although it was a long walk, we were rewarded with a view of the beautiful river Isar.

Isar river

Second Day

With our batteries recharged after the long day before, we went to the main auditorium, where the series of sessions began. After the hosts warmed up the audience, we had the first keynote.

The Ethics of AI – dealing with difficult choices in a non-binary world

In this keynote, Eric Reiss encouraged attendees to take a more critical view of ethics in an environment increasingly shaped by AI solutions. Among his examples were demonstrations of how human biases affect an algorithm's ability to judge.

First keynote

Another high point of Reiss's presentation was his account of his involvement in the creation of THE COPENHAGEN LETTER in 2017. This open letter is addressed to everyone who shapes technology today, with the objective of establishing ethical foundations for technology. Here are some excerpts:

  • We must seek to design with those for whom we are designing. We will not tolerate design for addiction, deception, or control.
  • We need digital citizens, not mere consumers.
  • We all depend on transparency to understand how technology shapes us.

Reinforcement Learning: a gentle Introduction and industrial Application

In this session, Christian Hidber briefly introduced us to concepts of reinforcement learning used before and after deep neural networks gained popularity and techniques like Deep Q-Learning emerged. To explain these algorithms, Hidber built an RL agent able to play a game based on the classic Snake.
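Hidber's Snake demo is not public, but the core of classic (pre-deep) Q-learning that he described can be sketched in a few lines. The grid states, four-action space, and reward values below are illustrative assumptions, not his implementation:

```python
# Tabular Q-learning sketch: one update step of
#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
# States, the 4-action space and the reward are toy stand-ins for a Snake-like game.

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """Move Q(s, a) toward the bootstrapped target: reward + gamma * max Q(s', .)."""
    best_next = max(q.get((next_state, a), 0.0) for a in range(4))
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

q = {}  # Q-table: (state, action) -> estimated value
q_update(q, state=(0, 0), action=1, reward=1.0, next_state=(0, 1))
```

Deep Q-Learning replaces the table `q` with a neural network that approximates Q(s, a), which is what lets the approach scale to large state spaces such as game screens.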

After showing us the fundamentals of RL, Hidber presented an industrial application based on siphonic roof drainage systems. For this application, he trained an agent to size drainage systems for large buildings such as stadiums and airports. With this agent, his company was able to replicate the intuition and hydraulic knowledge that experts need years to acquire.

The more data, the better the AI, isn’t it?

Michael Kieweg presented another session with plenty of useful insights. During the talk, he introduced us to the challenges of using AI to extract information from legal documents.

Among the problems they faced during the project's development were nonstandard documents, poor-quality scanned images, and users' differing interpretations of what is essential for the software to extract. Kieweg demonstrated deep knowledge during the presentation, answering both technical and business-related questions. Without a doubt, this was one of the best talks we attended during the event.

How to implement Chatbots in an industrial Context

Chatbots are reshaping the way clients interact with companies. With them, companies can offer 24/7 communication channels to their users without having to hire large support teams. Given the enormous impact that chatbots are having on the market, several companies, JArchitects among them, are betting on the development of these solutions. But choosing technologies for building chatbots can be difficult, considering the large number of tools and, in some cases, their lack of maturity. Thus, Dr. Christoph Windheuser's session aimed to demonstrate how to develop production-ready chatbots for real-world applications.

In the first part of the session, he showed us the need to create a personality for the bot. To do this, the developer defines characteristics that shape how the user perceives the chatbot, such as formality, proactivity, energy, and humour.

In the second part, Dr. Windheuser presented open-source and proprietary tools that developers currently use to create chatbots, highlighting the current strengths and weaknesses of each.

Lastly, he illustrated why it is difficult to test chatbots, as well as how to apply test-driven development and continuous delivery to improve a bot's performance over time.
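Dr. Windheuser's exact setup was tool-specific, but the test-driven idea can be sketched with a toy example: expected utterance-to-intent pairs become regression tests that run on every change. The naive keyword matcher below is a placeholder for a real NLU model, and the intent names are our own:

```python
# TDD for chatbots, minimal sketch: encode expected behaviour as test cases
# and check the bot against them after every change to the model or training data.

def classify_intent(utterance: str) -> str:
    """Toy intent classifier using naive substring matching (stand-in for real NLU)."""
    text = utterance.lower()
    if any(w in text for w in ("hi", "hello", "hey")):
        return "greet"
    if any(w in text for w in ("price", "cost", "how much")):
        return "ask_price"
    return "fallback"

TEST_CASES = [
    ("Hello there!", "greet"),
    ("How much does it cost?", "ask_price"),
    ("Tell me a joke", "fallback"),
]

failures = [(u, e) for u, e in TEST_CASES if classify_intent(u) != e]
```

In a continuous delivery pipeline, a non-empty `failures` list would block the deployment of the new bot version.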

How to track progress and collaborate in data science and machine learning projects?

In this session, Jakub Czakon and Kamil Kaczmarek gave us practical guidelines and tips for creating and maintaining collaborative data science projects. Among the examples, they showed us how to create reproducible pipelines and how to track metrics, hyperparameters, learning curves, and dataset versions.
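The speakers used their own tooling, but the core tracking ideas can be sketched by hand: record the hyperparameters, metrics, and an exact dataset fingerprint for every run. The CSV layout and field names below are our own illustration:

```python
import csv
import hashlib
from pathlib import Path

# Experiment-tracking sketch: one CSV row per run, each tied to a dataset
# fingerprint so any result can be traced back to the exact data it used.

def dataset_version(path: Path) -> str:
    """Fingerprint the dataset file so each run records exactly what it trained on."""
    return hashlib.sha256(path.read_bytes()).hexdigest()[:12]

def log_run(log_file: Path, params: dict, metrics: dict, data_version: str) -> None:
    """Append one experiment run as a CSV row, writing the header on first use."""
    row = {"data_version": data_version, **params, **metrics}
    is_new = not log_file.exists()
    with log_file.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row))
        if is_new:
            writer.writeheader()
        writer.writerow(row)
```

Dedicated experiment trackers add dashboards and collaboration on top, but the principle is the same: every run is logged with enough context to reproduce and compare it.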

TensorFlow Training on the JVM

Unlike most of the talks at the conference, and contrary to what is common in the ML world nowadays, Christoph Henkelmann showed how to train and execute a TensorFlow model from the Java Virtual Machine instead of Python. This capability may be decisive for some companies exploring artificial intelligence opportunities, given that many commercial projects run on the JVM.

Henkelmann presented a method to import models into a Java project and interact with them. In his approach, he creates the models using Python; once the architecture is compiled, it can be integrated into a Java environment using the TensorFlow Java API. Henkelmann expects that creating models directly in Java will soon become easier; although it is possible today, it is still challenging.

Up-close and personal: Hyper-Personalization using Deep Learning

This was a short talk, but Noa Barbiro was able to give us several insights on how to develop personalised recommendations, even when the service has millions of customers as a user base. Among the several models she presented was a recommendation service that suggests a sequence of cities for a trip; to create this solution, they used data from more than 50M trips organised through their website.

Third Day

The third day was our day :). Our session was scheduled to start at 11:45 am. To concentrate, we skipped the morning talks, even though some of them seemed interesting, like Christoph Henkelmann's Unsupervised Learning with Autoencoders and Deep Learning with Small Data by Hauke Brammer. But that's okay; they will probably be online, just like ours. 11:30, time to prepare the session!

Honey Bee Conservation using Deep Learning

11:45, let the show begin!

In our session, we presented a case study based on Thiago Alves' thesis, in which we employed image processing and ML techniques to help beekeepers assess honeybee colonies. This assessment is typically carried out via the laborious manual task of counting and classifying comb cells, a process some beekeepers perform many times a year.

The development of this project was not linear; it was often necessary to build several approaches to solve the different problems that came up. Some of the steps were: collecting images to create the dataset, detecting cells regardless of class, removing false positives, classifying cell contents, and developing software that beekeepers could operate. We named this software DeepBee.
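The real pipeline uses trained detectors and classifiers, but the staged structure of those steps can be sketched abstractly. Every function, rule, and the toy "image" (a list of candidate coordinates) below is a hypothetical stand-in for the corresponding component:

```python
from dataclasses import dataclass

# Staged-pipeline sketch: detect candidate cells, filter false positives,
# then classify cell contents. The rules here are toys, not the real models.

@dataclass
class Cell:
    x: int
    y: int
    label: str = "unknown"

def detect_cells(candidates):
    """Stage 1: detect comb cells regardless of their class."""
    return [Cell(x, y) for x, y in candidates]

def remove_false_positives(cells):
    """Stage 2: drop spurious detections (toy rule: negative coordinates)."""
    return [c for c in cells if c.x >= 0 and c.y >= 0]

def classify_cells(cells):
    """Stage 3: assign each remaining cell a content class (toy rule)."""
    for c in cells:
        c.label = "capped" if (c.x + c.y) % 2 == 0 else "empty"
    return cells

cells = classify_cells(remove_false_positives(detect_cells([(3, 4), (-1, 2), (5, 5)])))
```

Keeping the stages as separate, composable steps is what made it practical to swap in new approaches as problems came up during development.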

The presentation was about 45 minutes long. We showed several obstacles that arise in a real-world problem and the approaches we used to overcome them. We guessed that a session with more technical details would be well received by the audience, and it was. We had several questions after the session, and the conversation with attendees continued until lunch.

Does Deep Learning make Feature Engineering obsolete?

Vladimir Rybakov gave the last session we attended. The central idea of this talk was to make us aware that feature engineering techniques are still relevant. This process is often necessary even in the era of deep learning, an era known for models that can automatically discover from data which features improve predictions the most.

Firstly, Rybakov presented some feature engineering basics. Then, based on real-world cases, he showed how to apply FE in different subject fields. At the end of his presentation, we asked him whether he believed FE would soon become unnecessary. He believes it will, but many optimisations are still needed to create reliable automatic FE algorithms, which currently require a lot of computational power to produce valid results.
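As a concrete illustration of the kind of hand-crafted feature engineering the talk defended (the specific features below are our own example, not Rybakov's):

```python
import datetime as dt

# Manual feature engineering sketch: expand a raw timestamp into features a
# model can exploit directly, instead of hoping a network rediscovers them.

def timestamp_features(ts: dt.datetime) -> dict:
    return {
        "hour": ts.hour,
        "day_of_week": ts.weekday(),       # Monday = 0
        "is_weekend": int(ts.weekday() >= 5),
        "month": ts.month,
    }

feats = timestamp_features(dt.datetime(2019, 6, 22, 14, 30))  # a Saturday afternoon
```

Features like `is_weekend` encode domain knowledge in a single bit; a deep model can learn the same pattern, but only with enough data and capacity, which is exactly the trade-off the session explored.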


The ML Conference is not a big event yet, which makes networking more comfortable. We had several opportunities to talk with speakers and attendees from different countries, both at dedicated networking moments and at lunches and coffee breaks throughout the days. The sessions' content was generally of high quality and gave us several new insights that we will put into practice at JArchitects to support our goal of becoming a reference in the ML market.