February 2025
 

An LLM Made in Portugal

Announced by Portuguese Prime Minister Luís Montenegro last November, Amália, Portugal's own Artificial Intelligence (AI) model, is set to launch in 2026: a first beta version is expected by the end of the first quarter of 2025, with the full version following roughly 18 months later. To innovate in the Portuguese language, preserve the country's heritage, and put its culture at the service of innovation, the Portuguese government will invest €5.5 million in the project. The investment responds to the challenges posed by tools such as ChatGPT and to the need to adapt this technology to Portuguese, since most chatbots are trained predominantly on English data.

While there is enthusiasm surrounding the creation of a Portuguese LLM, we must also be mindful of the vulnerabilities and risks associated with its everyday use. Above all, we should pay close attention to the quality of the training data, as the model's performance depends heavily on it: malicious actors can manipulate data, and incorrect information introduced into it may compromise results. Rigorous data validation processes and regular audits help identify and correct such issues.
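As a hedged illustration of what one step of such a validation pass might look like, the sketch below applies two elementary checks to candidate training texts: removal of exact duplicates and a minimum-length filter. Real pipelines layer many more checks on top of this; the threshold used here is an arbitrary assumption for demonstration.

```python
import hashlib

def validate_corpus(texts: list[str], min_chars: int = 50) -> list[str]:
    """Drop exact duplicates and fragments too short to be meaningful."""
    seen: set[str] = set()
    kept: list[str] = []
    for text in texts:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:                 # exact duplicate: skip
            continue
        if len(text.strip()) < min_chars:  # too short to carry signal
            continue
        seen.add(digest)
        kept.append(text)
    return kept

samples = ["A short line.", "A short line.", "A properly sized paragraph. " * 5]
print(len(validate_corpus(samples)))  # -> 1: duplicates and fragments removed
```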

Here are a few tips to bear in mind when using an LLM:

 
 
1. Protect Your Personal Data
  • Avoid sharing sensitive information: Never provide personal or financial data, or unique identifiers such as your tax identification number.
  • Use generic examples or anonymisation: If you need to illustrate a case, replace real names with fictitious ones and remove identifying details.

LLM providers may log your prompts, and models can occasionally retain or reproduce fragments of sensitive data. Keeping your interactions generic minimises the risk of exposing private information.
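As a minimal sketch of the anonymisation advice above, the snippet below strips obvious identifiers from a prompt before it leaves your machine. The two patterns (email addresses and nine-digit numbers, the format of a Portuguese tax identification number) are illustrative assumptions; real PII detection needs dedicated tooling rather than a pair of regular expressions.

```python
import re

# Illustrative patterns only; production redaction needs dedicated tooling.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "TAX_ID": re.compile(r"\b\d{9}\b"),  # Portuguese NIF: nine digits
}

def redact(text: str) -> str:
    """Replace each match with a generic placeholder before sending."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise the invoice for Ana, NIF 123456789, contact ana@example.pt."
print(redact(prompt))
# -> "Summarise the invoice for Ana, NIF [TAX_ID], contact [EMAIL]."
```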

 
 
2. Validate Responses and Verify Sources
  • Question the information: Even if the responses seem credible, confirm them through external sources or subject-matter experts.
  • Be aware of possible inaccuracies: LLMs may return outdated data or make contextual errors, especially when they rely on old or incomplete information.

Technology evolves rapidly, and models may have been trained on limited or biased data. Comparing the answers you receive against reliable references helps filter out errors and promotes more responsible use.
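One small part of this verification can be automated: checking that any links a model cites actually resolve. The sketch below assumes the widely used `requests` library and a plain-text answer to scan; note that it only confirms a URL is reachable, not that the page supports the claim, so human review remains essential.

```python
import re
import requests

# Rough pattern: stops at whitespace and common trailing punctuation.
URL_RE = re.compile(r"https?://[^\s,)\]\"']+")

def check_cited_urls(answer: str, timeout: float = 5.0) -> dict[str, str]:
    """Report a rough reachability status for each URL found in an answer."""
    results: dict[str, str] = {}
    for url in URL_RE.findall(answer):
        try:
            status = requests.head(url, timeout=timeout,
                                   allow_redirects=True).status_code
            results[url] = "reachable" if status < 400 else f"HTTP {status}"
        except requests.RequestException as exc:
            results[url] = f"unreachable ({type(exc).__name__})"
    return results

answer = "See https://example.com/report for details."
print(check_cited_urls(answer))
```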

 
 
3. Provide Clear and Contextualised Examples
  • Define your objective: Clearly explain what you want — whether it's a summary, a critical analysis, or simply a code example.
  • Include relevant details: If you're requesting a technical solution, specify the programming language and version you're using. If you need a summary, mention the target audience or desired length.

A well-defined request helps the LLM better understand what is needed, resulting in more accurate responses and reducing the need for repeated adjustments.
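To make this concrete, here is a minimal sketch that assembles the request from explicit fields rather than a vague one-liner. The field names and example values are illustrative assumptions, not a prescribed format.

```python
def build_prompt(objective: str, details: dict[str, str]) -> str:
    """Combine a stated objective with explicit context lines."""
    context = "\n".join(f"- {key}: {value}" for key, value in details.items())
    return f"{objective}\n\nRelevant details:\n{context}"

prompt = build_prompt(
    "Write a function that validates Portuguese postal codes.",
    {
        "Programming language": "Python 3.12",
        "Expected output": "one function with type hints and a docstring",
        "Constraints": "standard library only",
    },
)
print(prompt)
```

The same pattern works for non-code requests: swap in fields such as target audience and desired length when asking for a summary.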

 
 
4. Know the Limits and Biases
  • Understand the model's dataset: Check when the model's training data was collected (its knowledge cutoff) and, where possible, the volume and diversity of its data sources.
  • Consider potential biases: LLMs may reflect the prejudices and stereotypes present in their training data. It's crucial to read responses critically and to remember that not everything stated will be accurate.

Transparency regarding a model's limitations helps you interpret its responses cautiously and avoid spreading misinformation or stereotypes.
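One practical way to surface some of these biases is a counterfactual probe: send the same prompt twice with a single attribute changed and compare the answers. The sketch below uses a hypothetical `query_llm` stub standing in for whichever model API you use; the paired-prompt pattern is the point, not the client.

```python
TEMPLATE = "Describe the career prospects of {name}, a nurse from Porto."

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client call."""
    return f"(model answer to: {prompt})"

def bias_probe(variants: list[str]) -> dict[str, str]:
    """Collect one answer per variant so they can be compared side by side."""
    return {name: query_llm(TEMPLATE.format(name=name)) for name in variants}

# Differences in tone, assumptions, or level of detail between the answers
# can reveal stereotyped treatment that a single response would hide.
for name, answer in bias_probe(["João", "Maria"]).items():
    print(name, "->", answer)
```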

 
 
5. Contribute to Improvements and Stay Informed
  • Provide constructive feedback: If you detect errors or receive inadequate responses, share details about the issue whenever possible to help improve future developments.
  • Stay informed: Keep up with new LLM versions, read about best practices in security and privacy, and stay aware of trends in Artificial Intelligence.

By contributing to the refinement of these systems, you help make them safer and more reliable for everyone. Continuous learning keeps you up to date with ongoing innovations and best practices.

 
 
 

By following these five tips, you can make the most of LLMs’ potential while maintaining an informed, safe, and conscientious approach. Large language models are powerful and versatile, but they require careful use to mitigate risks and maximise benefits.

 
