There is a massive amount of information available from public sources on the internet, and when making business decisions, the limitation is often the human capacity to process all of it. With the rise of generative Artificial Intelligence and Large Language Models, generating summaries to help the people making those decisions has become increasingly relevant and feasible.

Another topic deservedly getting more and more attention is the sustainability of AI. At Outokumpu we are committed to sustainability in many respects, and sustainable AI is an important topic for us as well.

We are asking you to help us create a sustainable AI solution that can utilize public data feeds and provide reliable summaries of recent trends and points of interest to support truly AI-augmented, data-driven decision making!


We will have people onsite to answer questions (detailed hours to be communicated later), and you can also reach us with questions via asynchronous means (email or other digital channels). We will also be available for meetings to discuss the challenge and answer questions.


The winning team will receive 2000€, the first runner-up 1000€, and the second runner-up 500€.


The winner will be selected through a subjective evaluation of the technical implementation and the quality of outputs by the Outokumpu team. The key evaluation dimensions used for scoring the solutions are:


1. Trustworthiness of results – hallucinations undermine the needed trust and must be avoided, and reliable sources should be emphasized. One part of this is providing references, but another is making sure that the answer is based on real facts and is as correct as possible.

2. Efficiency of the solution – ideally the amount of computation, the number of queries, and consequently the energy consumption should be optimized for efficiency and ecological impact. This means that to get the best score in the evaluation, you should pay close attention to which model you use and how. It might be easiest to get nice results simply by calling, for example, one of OpenAI’s larger LLMs, but due to their size, their efficiency is probably not the best – especially if you perform multiple queries with large context sizes. Using a smaller model and constructing your queries in a smart way can go a long way towards a more efficient solution.

3. Comprehensiveness and readability of the results – summaries must be comprehensive but still easy to understand, and must provide references to the source data in a usable manner. The ideal length of a response is 1–2 paragraphs, but this naturally depends on the question. Simple and concise is the goal.

4. Architecture choices – you should strive towards a structure that allows parts of the solution to be swapped out should new approaches, for example more ecological language models, emerge.
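To make the trustworthiness criterion concrete, one simple pattern is to carry the sources alongside every generated summary so each claim can be checked. The sketch below is purely illustrative – the `Answer` type and `render` helper are our own example names, not part of any required interface:

```python
from dataclasses import dataclass


@dataclass
class Answer:
    """A summary bundled with the sources it draws on."""
    text: str
    sources: list  # URLs or titles of the public data feeds used


def render(answer: Answer) -> str:
    """Render the answer with numbered references so claims are verifiable."""
    refs = "\n".join(f"[{i}] {s}" for i, s in enumerate(answer.sources, 1))
    return f"{answer.text}\n\nSources:\n{refs}"
```

Keeping the sources as structured data (rather than asking the model to recite them) avoids one common class of hallucination: invented citations.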
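The efficiency point above can be illustrated with a minimal sketch: instead of sending every retrieved document to the model, rank passages by a cheap relevance measure and include only the top few in the prompt, shrinking context size and energy use. The keyword-overlap scoring here is a deliberately toy assumption – a real solution might use embeddings or another retrieval method:

```python
def score(passage: str, query: str) -> int:
    """Toy relevance: count query words that appear in the passage."""
    words = set(query.lower().split())
    return sum(1 for w in passage.lower().split() if w in words)


def select_context(passages: list, query: str, k: int = 2) -> list:
    """Keep only the k most relevant passages to shrink the prompt."""
    ranked = sorted(passages, key=lambda p: score(p, query), reverse=True)
    return ranked[:k]
```

Whatever the ranking method, the principle is the same: pre-filter before the expensive model call rather than paying for a large context on every query.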
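The interchangeability asked for in the architecture criterion can be achieved by having the pipeline depend on a small interface rather than a concrete model. The names below (`Summarizer`, `build_report`) are an illustrative sketch, not a prescribed design – any backend implementing the interface could be dropped in later:

```python
from abc import ABC, abstractmethod


class Summarizer(ABC):
    """Interface so the underlying language model can be swapped later."""

    @abstractmethod
    def summarize(self, text: str) -> str: ...


class TruncatingSummarizer(Summarizer):
    """Trivial stand-in backend: returns only the first sentence."""

    def summarize(self, text: str) -> str:
        return text.split(".")[0] + "."


def build_report(backend: Summarizer, text: str) -> str:
    # The pipeline depends only on the interface, not on any one model.
    return backend.summarize(text)
```

If a more ecological model becomes available, only a new `Summarizer` implementation needs to be written; the rest of the solution is untouched.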


Please note that the expectation here is not to train your own LLM – that would also be very resource- and energy-intensive – but to emphasize which model you choose and how you use it to produce high-quality, trustworthy results in an efficient manner. And of course, if you have any questions, please reach out to the Outokumpu team – we are more than happy to help!


Outokumpu is a global leader in stainless steel. We are working towards a world that lasts forever and are proud to be the producer of the most sustainable stainless steel in the world: we help our customers reduce their carbon footprint. Our business is based on the circular economy; in fact, our mills are significant recycling facilities.


We are headquartered in Helsinki, Finland and employ some 8,500 professionals in more than 30 countries around the world. Our people are at the heart of our success. Making sure our people feel welcome and that they are equally heard and have equal opportunities is a key driver of Outokumpu’s business. Read more about what it’s like to work with us at our careers page.


Our AI and Digital Innovation Hub is a unit inside Outokumpu focused on driving the adoption of Artificial Intelligence across the company and finding the right solutions to turn the vast amounts of data we have into real value. We use both modern generative AI technologies, such as Large Language Models, and more classic machine-learning approaches, while actively collaborating with internal and external stakeholders in a dynamic innovation model geared towards rapid results.