Image by Ahmed Gad from Pixabay

Tech-Life-Analogies: Introduction & Machine Learning

Parallels between technology and my private life

Michael Hoss
8 min read · Jan 9, 2023

As my focus switched from engineering topics to handling my private life, I could not help but draw analogies between these two worlds. It might sound stupid and overly nerdy to regard life through the lens of engineering principles, but for me, it would be a missed opportunity not to do so. If I put great effort into applying the latest intelligent methods to robots while thinking that in my private life everything will automatically turn out fine, then it would be no surprise if I consider robots superior to humans at some point. I will treat myself at least as well as I treat my robots.

This chapter should not be regarded as a piece of scientific text. Its contents are subjective personal observations that have no general validity.
I am not an expert in most of the mentioned technical topics. My aim is to show how I think about these topics, not so much to give factually correct introductions to them. If you feel lost in unknown technical terms, I hope that Wikipedia or YouTube can provide the understanding necessary to comprehend what I am trying to express.

I would also like to highlight that whenever I make the following considerations, I keep in mind that all models are wrong, but some are useful. I don't expect them to be good explanations of my human experience, but I am curious whether they sometimes come close, and I am grateful when they clarify my thoughts about specific situations. In daily situations, I consider myself lucky to merely notice the existence of such analogies without getting caught up in them too much. Only when I sit down to think about them in more depth am I able to write hopefully coherent texts about them.

After having done research in automated driving, I found quite a few parallels between how we let our cars navigate through traffic and how I myself navigate through life. This applies to machine learning, to the sense-plan-act steps of autonomous mobile robotics, to the interaction of hardware and software, and to the assurance of safety.

Machine Learning

My brain is not an artificial neural network, but a real one.
It is larger, more general-purpose, and more advanced than anything I have dealt with in the literature so far. With the realization that I am a neural network, many things suddenly make sense to me.

For example, I cannot just program and compile new skills, behavior, or knowledge into myself like I sometimes would like to do. Every intended discrete capability of mine must instead be carefully trained into my mind and body, which is naturally not straightforward because neural networks are generally continuous, whereas my rational thoughts often deal with discrete concepts. Unknown biases in the training data, such as things I was unaware of when I was younger, end up as unconsciously learned correlations in my mind. When I encounter novelties during inference, things might just work out well, but I might also end up performing unexpected and harmful generalizations that relate to these learned correlations. My learning rate now might be lower than ten years ago, but I think my methods for training are more advanced than ever.
Furthermore, the possibilities to train neural networks and reinvent myself seem huge because artificial neural networks, at least, have mind-blowing approximation capabilities if only they are trained well with the right data.

So if my brain is a neural network, which neural network architectures does it implicitly contain? Sometimes I feel like I work like a generative adversarial network (GAN). Part of me is a discriminator that questions whether the things that my generator proposes really make sense. Based on what the discriminator says, my generator comes up with more advanced suggestions for what I do in life. In this cycle of self-reflection, I hope that both parts simultaneously become wiser as I walk through life and neither is able to unhealthily outperform the other.
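
For technically minded readers, here is a minimal sketch of such a generator-discriminator cycle. It assumes PyTorch; the layer sizes and the toy "real data" are invented purely for illustration and do not model anything from my life.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator, mirroring the analogy:
# the generator proposes, the discriminator questions the proposals.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_data = torch.randn(64, 2) * 0.5 + 2.0  # stand-in for "what really makes sense"

for step in range(1000):
    # Discriminator step: learn to tell real examples from generated proposals.
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (bce(discriminator(real_data), torch.ones(64, 1))
              + bce(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: come up with proposals the discriminator accepts.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Over many such cycles, both parts ideally improve together.
```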

When I perceive my environment and myself, I also see patterns of convolutional neural networks (CNNs) inside me. My convolution kernels generally become more advanced over time for the things that I care about and for which I have meaningful data. A simple visual example would be recognizing a rare object more easily after having seen it multiple times recently. Furthermore, I see principles of CNNs in my perception of abstract art. My input layers can map the superficial geometric elements of a piece of art to a representation in a latent space somewhere in the middle of my brain. My deeper layers would then try to map this latent space representation onto something meaningful from the real world that I have seen before. For abstract inputs, I think that this only works partially. On the one hand, the latent space representation of something abstract is probably too far away from semantically meaningful regions of the latent space to get mapped to a definite label. On the other hand, I believe that such apparently meaningless latent space representations can still be located in regions that evoke feelings in me. If I see an abstract painting that mimics whiskers and pointed ears, I might feel my associations with cats even though I don't actually detect one. I explain this by the fact that I have clear-cut and conscious labels for classifying a cat, but my feelings about cats have somehow sneaked in as correlations along the way without any clear-cut borders.
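
As a rough illustration of the split between "input layers to latent space" and "latent space to labels", here is a small sketch. It assumes PyTorch; the architecture, the stand-in "abstract painting", and the hypothetical cat prototype are made up for illustration only.

```python
import torch
import torch.nn as nn

# Input layers: superficial geometric elements -> latent representation.
encoder = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 16-dimensional latent vector
)
# Deeper layers: latent representation -> labels such as "cat".
classifier = nn.Linear(16, 10)

abstract_painting = torch.randn(1, 1, 32, 32)  # stand-in for an abstract input
latent = encoder(abstract_painting)
probs = torch.softmax(classifier(latent), dim=1)
# For an abstract input, probs may stay diffuse, i.e. no definite label.

# Yet the latent vector can still lie close to a region associated with cats
# and thereby trigger the feelings learned alongside that class.
cat_prototype = torch.randn(16)  # hypothetical centre of the "cat" region
closeness = torch.cosine_similarity(latent, cat_prototype.unsqueeze(0))
```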

In addition to the mentioned network architectures, I also think that my brain works a bit like a recurrent neural network (RNN) when I play along to songs or drive along a road. The more song or road information from recent time steps I have observed, the better I can predict which part of the song or road comes next.
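
A minimal sketch of this kind of next-step prediction, assuming PyTorch and a made-up vocabulary of notes or road segments:

```python
import torch
import torch.nn as nn

vocab = 32                                   # e.g. 32 possible notes or road segments
embed = nn.Embedding(vocab, 16)
rnn = nn.GRU(16, 64, batch_first=True)       # accumulates the recent context
head = nn.Linear(64, vocab)

observed = torch.randint(0, vocab, (1, 20))  # the part already heard or driven
hidden_states, _ = rnn(embed(observed))
next_logits = head(hidden_states[:, -1])     # prediction for the next step
next_guess = next_logits.argmax(dim=1)
# The longer the observed sequence, the more context the hidden state carries
# for guessing what comes next.
```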

When I learn new capabilities, I think that the concept of adapting pre-trained networks to new data in related contexts is helpful in real life, too.
For example, when I take up a new type of sport, I benefit from having done similar but different sports in the past. In this regard, I think that the message of the book Range by David Epstein is in accordance with the practices of machine learning. For example, when Roger Federer started playing tennis professionally, he already had a number of pre-trained networks to rely on from the sports that he did as a child. Such preconditions likely accelerated his specific training efforts through generalization advantages that players who only trained tennis from scratch never had.

I observed something similar in myself. When I started playing the guitar, I already knew how to play the drums. In this situation, I believe that my brain re-used as many pre-trained layers as possible and only came up with a few additional neural structures to direct the existing capabilities to a new application. If some of my hidden layers for playing the drums and the guitar are actually shared, then I believe that I automatically improve parts of my drumming even when I only practice the guitar! In this example, the subject matter of the shared hidden layers is rather straightforward. For example, it could be keeping the rhythm in a song. However, I also believe that we can re-use those parts of our networks that are much more abstract and less conscious. For example, my habit of tidying up or not tidying up my room might somehow be neurally shared with the order or chaos in emotional and social aspects of my life.
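
In machine-learning terms, this corresponds roughly to a shared trunk with one small head per task, where training the new head also updates the shared layers. The sketch below assumes PyTorch; all sizes and the stand-in "practice situation" data are invented for illustration.

```python
import torch
import torch.nn as nn

# Shared hidden layers (e.g. keeping the rhythm) plus one small head per instrument.
shared_trunk = nn.Sequential(nn.Linear(12, 64), nn.ReLU(),
                             nn.Linear(64, 64), nn.ReLU())
drums_head = nn.Linear(64, 4)    # trained long ago
guitar_head = nn.Linear(64, 6)   # the few new structures for the guitar

# Practising the guitar updates the guitar head AND the shared trunk ...
optimizer = torch.optim.SGD(
    list(shared_trunk.parameters()) + list(guitar_head.parameters()), lr=1e-2)

practice_input = torch.randn(8, 12)   # stand-in for a practice situation
target = torch.randint(0, 6, (8,))
loss = nn.functional.cross_entropy(guitar_head(shared_trunk(practice_input)), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# ... so the drumming pathway, which runs through the same trunk, shifts as well,
# even though drums_head itself was never touched.
drums_output = drums_head(shared_trunk(practice_input))
```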

I guess that somewhere in my brain, there exist hidden layers that apply very generally to various activities in my life. If they have a large impact, then the potential in taking care of these layers must be huge. To unlock their potential, I think it makes sense to engage with spirituality and with feelings that are too deep to put into words, as a key to the subconscious. For similar reasons, I am a fan of abstract educational topics like mathematics. They might not directly serve any practical application, but they train my abstract thinking and equip me with the capability to adapt more easily to many applications once my brain creates the necessary application-specific additional structures. But of course, if no usage in an application ever takes place, then the valuable general layers remain useless.

These beliefs of mine are also my motivation to use my insights from technology for my private life and vice versa. I feel empowered and unified as a human being when I discover that I can re-use parts of my brain across otherwise separate areas of life.

From my perspective, such views on machine learning can also be readily applied to psychotherapy for burnout. Depending on the mental depth of the personal difficulties, or, respectively, the depth of the brain's neural layers that would benefit from retraining, different therapeutic approaches seem to make sense. If I have learned unfavorable things in my childhood that have subconsciously complicated my life ever since, then I guess the methods of depth psychology have the greatest potential because, as far as I understand, they directly target the deeper layers of my brain. On the other hand, if deep down, things seem to be working out fine, but problems are located in the more superficial layers closer to the input/output of my brain, then I guess behavioral therapy can more likely do the trick. However, if I apply only superficial approaches to problems that are located deep down, I could go crazy while trying to compensate for the stiffness of the untouched deeper parameters by hopelessly manipulating the remaining trained superficial parameters. This is why I like to identify those bottleneck parameters whose retraining requires the least effort for achieving the same overall improvement.
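
In code, this selective retraining corresponds to freezing a network and unfreezing only the chosen layers. The sketch below assumes PyTorch; the mapping of layers to "deep" and "superficial" is of course only the analogy, not neuroscience.

```python
import torch.nn as nn

# A stack of layers from "deep" (early experiences) to "superficial" (input/output behavior).
model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),  # deep layers: what depth psychology would target
    nn.Linear(32, 32), nn.ReLU(),  # middle layers
    nn.Linear(32, 5),              # superficial layers: what behavioral therapy would target
)

# Selective retraining: freeze everything, then unfreeze only the layers
# (the "bottleneck parameters") whose change promises the most improvement.
for param in model.parameters():
    param.requires_grad = False
for param in model[0].parameters():  # here: retrain only the deepest layer
    param.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
# An optimizer built from `trainable` would then touch only those parameters.
```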

If this white-box approach is too cumbersome, I could also just see the brain as one whole unit without any subdivisions and be agnostic to its internals. Such end-to-end approaches would be my ideal, but if they show unfavorable outcomes, then the levers for improvement are usually hard to comprehend. I ideally want to become somewhat aware of how my mind and feelings work, but achieving this in a general sense seems pretty difficult to me, just as explainable AI is not easy either. In the end, I am not only a fan of training my body, but have also started to see a point in training my mind.

This article is an excerpt from my booklet Burnout & Recovery — Inspiration, Reflection, Bureaucracy & Analogies (free PDF at DNB).

Continue reading: Tech-Life-Analogies: Autonomous Mobile Robotics

Disclaimer

The sections of this booklet that describe my interactions with other persons are only my subjective impressions and should not be seen as objective descriptions. Since my private circumstances changed over the course of writing (summer 2021 to summer 2022), not all contents still apply at the time of publication. I intentionally avoided excessive re-writing of such contents to prevent overproduction.


Michael Hoss

Curious about how this world works and excited to play my part in it.