I am Erik, a 29-year-old master's student in AI and a researcher in the Cognitive Systems group at the University of Bamberg. My interests are interpretability and explainable AI (xAI). I am currently working on my master's thesis, in which I explore the latent representations of medical foundation models, aiming to uncover meaningful, interpretable features that can be manipulated and controlled to improve the transparency and fairness of AI systems.
- 🔬 Mechanistic Interpretability: Understanding how neural networks represent knowledge and how those representations can be controlled to improve model transparency.
- 🖌️ Generative Models: Investigating the use of latent space manipulation for steering generative models, with applications to fairness and bias mitigation.
- 🧠 Sparse Autoencoders and Crosscoders: Using sparse autoencoders and crosscoders to induce monosemantic features and simplify latent space manipulation (see the sketch after this list).
- 🩺 AI in Medical Imaging: Applying xAI to medical domains to address bias and underrepresentation in AI-generated datasets.
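
To make the sparse autoencoder idea concrete, here is a minimal PyTorch sketch of the general technique: an overcomplete encoder with an L1 sparsity penalty on its activations, trained to reconstruct a model's internal activations. The class and function names, the expansion factor, and the L1 coefficient are illustrative assumptions, not values from my thesis.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal sparse autoencoder: an overcomplete linear dictionary whose
    L1-penalized activations tend toward single, interpretable features."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        f = torch.relu(self.encoder(x))  # sparse feature activations
        x_hat = self.decoder(f)          # reconstruction of the input
        return x_hat, f

def sae_loss(x, x_hat, f, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparsity.
    return ((x - x_hat) ** 2).mean() + l1_coeff * f.abs().mean()

# Toy usage: a batch of activations from some model layer (d_model = 768
# and the 8x expansion factor are arbitrary choices for illustration).
x = torch.randn(32, 768)
sae = SparseAutoencoder(d_model=768, d_hidden=8 * 768)
x_hat, f = sae(x)
loss = sae_loss(x, x_hat, f)
loss.backward()
```

Once trained, individual latent units of such a model can be inspected or scaled directly, which is what makes latent space manipulation simpler than editing raw activations.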
In an age where AI is transforming industries and, more importantly, society, it is essential that these systems remain ethical, transparent, and fair. I want to do my part in achieving that.
Feel free to reach out - I am always interested in collaborating or just exchanging ideas! 🤓
