Using Rosetta with UTM and Docker

For a recent robotics project I had to run a ROS/Gazebo pipeline that was based on a pre-built x86/amd64 Docker image.

I do most of my local development work on an M1 Mac. The M1 is an ARM-based processor, so the question was how to get the stack to run. Unfortunately, Docker for Mac was not an option, because it wouldn’t run Gazebo properly. So I knew I had to go with Linux, and that somewhere between the OS and Docker I needed something (QEMU or similar) to translate from the x86 to the ARM instruction set.

The real breakthrough was the realization that if you use Apple’s virtualization framework to run a Linux VM, you can use Rosetta (Apple’s own processor translation layer) within that VM; this blog post helped tremendously.

Maybe I will do a follow-up on how exactly I made it work, but the basic setup is: Debian 12 in a VM (utilizing Apple Virtualization in UTM, https://getutm.app), then running the x86/amd64 Docker image through Rosetta. I am still amazed that this works.
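Until that follow-up, the moving parts can be sketched as a few commands inside the Debian guest. This is only a sketch under assumptions from my setup: the Rosetta share is exposed with the VirtioFS mount tag `rosetta`, it gets mounted at `/media/rosetta`, and Rosetta is registered via binfmt_misc as the interpreter for x86-64 ELF binaries. Your tag, paths, and binfmt details may differ.

```shell
# Inside the Debian 12 guest (UTM with the Apple Virtualization backend
# and "Enable Rosetta" turned on). Paths and mount tag are assumptions.

# 1. Mount the Rosetta share provided by the Virtualization framework
sudo mkdir -p /media/rosetta
sudo mount -t virtiofs rosetta /media/rosetta

# 2. Register Rosetta with binfmt_misc as the handler for x86-64 ELF binaries
sudo update-binfmts --install rosetta /media/rosetta/rosetta \
    --magic "\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00" \
    --mask "\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff" \
    --credentials yes --preserve no --fix-binary yes

# 3. Run the x86/amd64 image; its binaries are now translated by Rosetta
docker run --rm --platform linux/amd64 hello-world
```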

Using Blender for 3D Animation and Modelling

The first long-term support release of the 3.x series is here!
Discover Blender 3.3 LTS

So this is great news. Blender has been a staple of open-source software for as long as I can remember, and you can basically do everything with it. It has a steep learning curve, so you need to put in some time and effort, but it really is an amazing piece of software. See their blog post on how Blender was used to create the effects for the recent Bollywood blockbuster “RRR”.

A gatekeeper for me was always GPU acceleration. I never really had a personal computer that was that powerful, or in recent times one that even had a dedicated GPU. But with Blender 3 there is GPU acceleration via Apple’s Metal framework, and this is a game changer even if you are just on an M1 MacBook Air, as I am.

So to dive right in, I chose the now-famous Blender donut tutorial series to practice my skills, and oh boy, this is fun. With all the scripting support it is also an incredible tool for data visualization, and I can’t wait to spend more time with it.
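As a taste of that scripting support, here is a minimal sketch of a data-driven scene: a tiny 3D bar chart built from a list of numbers. It assumes it is run inside Blender (where the `bpy` module exists); `bar_heights` is my own illustrative helper, not a Blender API.

```python
# Sketch: driving a Blender scene from data via the bpy scripting API.
# Assumes this runs inside Blender; bar_heights is a hypothetical helper.

def bar_heights(data, max_height=2.0):
    """Scale raw values so the largest bar is max_height units tall."""
    peak = max(data)
    return [max_height * v / peak for v in data]

try:
    import bpy
except ImportError:  # keep the sketch loadable outside Blender
    bpy = None

if bpy is not None:
    data = [3, 7, 1, 9, 4]
    for i, h in enumerate(bar_heights(data)):
        # one cube per data point, scaled on Z so the bar spans z in [0, h]
        bpy.ops.mesh.primitive_cube_add(size=1.0, location=(i * 1.5, 0, h / 2))
        bpy.context.object.scale.z = h
```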

As for donuts, I just find it amazing that I basically created this out of thin air. Here’s my take:

PyTorch on Apple Silicon

In collaboration with the Metal engineering team at Apple, we are excited to announce support for GPU-accelerated PyTorch training on Mac.

It’s about time. It has been 18 months since the first M1 chips shipped, and we finally have support for the advanced GPUs on these chips. Being able to leverage your own local machine for simple PyTorch tasks, or even just for local testing of bigger projects, is of tremendous value. One thing I am still confused about is that all of this technology still only uses the ‘standard’ GPU cores; there is still no way to access the chip’s custom Neural Engine, which would make on-device inference even better. But it’s up to Apple to give access to their internal APIs.
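Actually using the new backend is pleasantly boring. Here is a minimal sketch, assuming PyTorch 1.12 or later (where the `"mps"` device was introduced); the `pick_device` helper is my own illustration, not part of PyTorch.

```python
# Minimal sketch of the new MPS (Metal) backend in PyTorch.
# Assumes PyTorch >= 1.12; pick_device is a hypothetical helper.

def pick_device(mps_built: bool, mps_available: bool) -> str:
    """Choose 'mps' when the Metal backend is both compiled in and usable."""
    return "mps" if (mps_built and mps_available) else "cpu"

try:
    import torch
except ImportError:  # keep the sketch importable without PyTorch installed
    torch = None

if torch is not None:
    device = torch.device(
        pick_device(torch.backends.mps.is_built(), torch.backends.mps.is_available())
    )
    x = torch.randn(1024, 1024, device=device)
    y = x @ x  # this matmul runs on the M1 GPU when device is "mps"
    print(y.device)
```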

Swift for TensorFlow

Swift for TensorFlow is a next-generation platform for machine learning, incorporating the latest research across: machine learning, compilers, differentiable programming, systems design, and beyond. This project is at version 0.2; it is neither feature-complete nor production-ready. But it is ready for pioneers to try it for your own projects, give us feedback, and help shape the future!

This is truly exciting, and sooner or later I have to brush up on my Swift skills. Take a look at the video above and see what the future of ML research holds. Automatic differentiation looks amazing.

How to delete Time Machine snapshots on your Mac

Nice post by Glenn Fleishman about how to delete Time Machine snapshots on the Mac. Ever since APFS, Time Machine snapshots are automatically created and deleted when disk space becomes scarce. However, if you want to manage that yourself (as most power users tend to want), here’s your solution.
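The gist boils down to a couple of `tmutil` invocations. A sketch, assuming a recent macOS where `tmutil` supports these verbs; the snapshot date below is a placeholder, not a real snapshot name.

```shell
# List local (APFS) Time Machine snapshots on the boot volume
tmutil listlocalsnapshots /

# Delete one specific snapshot by its date stamp
# (placeholder date — use one from the listing above)
sudo tmutil deletelocalsnapshots 2022-09-01-123456

# Or reclaim space in bulk: thin snapshots on / until ~20 GB
# (in bytes) is free, with urgency 1-4 (4 = most aggressive)
sudo tmutil thinlocalsnapshots / 21474836480 4
```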