I. Our Mission
We are dedicated to advancing Artificial Intelligence for low-resource contexts. Our research explores multimodality, safety, and sample-efficient learning, while our tools help bridge the data gap for small languages in just a few lines of code.
II. Our Projects

Synglot, our data synthesis library
Our attempt at raising the tide for low-resource languages by making it easy to translate and generate rich, expressive, and diverse data.

Challenging Datasets
High-quality data is paramount for effective post-training. We provide novel, previously-unavailable datasets in Macedonian, from mathematics to medical scenarios.

Open-Source Models
We fine-tune open models to see better, and reinforce them to be smarter.
III. Our Philosophy
Large language models are trained and used at an unprecedented scale. As a result, the resource strain for open-source builders and users — be it GPU hours or trillions of tokens — is monumental. We believe much can be achieved with small, local models, by incorporating the latest research advances and doing the boring parts well.
This means:
- experimenting with novel architectures and techniques
- being meticulous about high-quality data
- optimizing for efficiency without sacrificing capability
Safety is paramount in our work. As AI capabilities move into more agentic territory, we find it critical to understand and prevent misaligned behavior. For us, this means familiarity with the most recent safety research, transparency in our development practices, and rigorous testing.
IV. Contribute to our Work
For Researchers & Engineers: If you are interested in any part of the LLM development pipeline, get in touch. We are particularly eager to collaborate with people interested in multimodality and LLM evaluations, as we are seeking to pursue and publish research in these areas.
For Organizations & Institutions: You can support our research by sharing our work with your networks, sponsoring compute credits, or partnering with us.
V. People

Ilija Lichkovski
Ilija is a physicist turned AI researcher. His research experience spans activation engineering (using a novel method to elicit or inhibit complex behaviors in LLMs), real-world safety of AI agents, and reinforcement learning. During his physics education, he worked on engineering optical instruments for the European Space Agency's ATHENA space telescope at the Netherlands Institute for Space Research, and on molecular dynamics associated with a genetic disease, studied using nuclear magnetic resonance spectroscopy at the Zernike Institute for Advanced Materials. Ilija finds it crucial to train models that exhibit compositional generalization, reliably composing complex functions from simpler ones.

Maja Mishevska
Maja Mishevska is an undergraduate student at Brown University studying Computer Science and Gender Studies. She has a background in traditional software engineering and user interface design, and her current interests lie in the sociotechnical aspects of AI, including human-computer interaction, accountability, and interpretability. Her work has centered on multilingual and cross-cultural web platforms, with an emphasis on public impact tech projects, as well as developing frameworks to evaluate the accessibility of VR and AI-powered tools. Through coursework and research, she is particularly interested in how we can design AI systems that are both technically robust and socially responsible.

Martina Janeva
Martina Janeva is currently pursuing a Master's degree in Data Science & Artificial Intelligence at the University of Antwerp. She has a Bachelor's degree in Computing Science from the University of Groningen. Her work focuses on machine learning and data-centric AI, with an emphasis on building models that are interpretable and reliable. For her Bachelor's thesis, she explored learning in the model space, comparing sampling algorithms to examine how machine learning can operate over structured models rather than raw data. More recently, she has been analyzing scientific progress through citation networks, tracing how knowledge evolves across research papers over time. She is especially interested in how models learn, how data influences that process, and how we can make AI systems more transparent.

Ivona Najdenkoska, PhD
Ivona Najdenkoska is a Postdoctoral Researcher at the University of Amsterdam, working on multimodal foundation models and the detection of AI-generated content. She completed her PhD at the University of Amsterdam under the supervision of Marcel Worring and Yuki Asano. Her doctoral research, conducted within the MultiX and AIMLab groups, explored vision-language learning with a focus on designing efficient approaches for multimodal understanding and generative tasks. Ivona was also a Research Scientist Intern at Meta in 2023, where she worked on image generation within the GenAI org. She holds a Master's degree in Artificial Intelligence from KU Leuven and a Bachelor's degree in Computer Science and Engineering from the University "Ss. Cyril and Methodius" in Skopje. Prior to her academic career, she worked as a Software Engineer at Netcetera.