5 questions to...
Gianmarco Ciarfaglia

An interview with the Senior Manager of AI & Advanced Analytics at Engineering.

Since 2023, Gianmarco Ciarfaglia has been Senior Manager of the AI & Advanced Analytics area within the AI & Data Technology Business Line at Engineering Group.

The AI & Advanced Analytics team brings together Engineering’s expertise in AI, Generative AI, machine learning, operations research, and applied mathematics more broadly. The team includes around 50 professionals and delivers over 30 projects annually.

In 2018, Gianmarco launched and led the development of Engineering’s proprietary large language model, EngGPT, which is now used by several major clients and stands as a key benchmark for Private GenAI in the Italian landscape.

Previously, Gianmarco held roles of increasing responsibility in data and AI across major Italian and international companies, including BNL and EY.

1. BEYOND OPERATIONAL EFFICIENCY: WHAT ARE THE MOST EXCITING INNOVATIONS THAT AI IS BRINGING TO COMPANIES AND ORGANIZATIONS?

We’re currently seeing a strong focus among our clients on operational efficiency as the primary short-term benefit this technology can deliver. However, that’s just the tip of the iceberg. What lies beneath is a shift that is set to be far more transformative and disruptive than merely incremental or performance-enhancing.

One of the most exciting areas of innovation is the creation of high-quality, AI-generated content, where the barriers to entry have dropped dramatically. Think of the new image generation and editing tools recently released: they make it possible to design entire advertising campaigns through a simple chat interface, using artistic styles that were once exclusive to large-scale productions.

Another compelling use case is the development of highly specialized code, even in niche languages and technologies whose syntax is mastered by few, yet which are now accessible to many with the support of AI.

We’re also witnessing the rise of a truly innovative capability: natural, human-like interaction. These tools not only respond faster than a human possibly could, but they also convey empathy and adapt to the user's tone and emotional state, closely mirroring real human conversation.

Today, large language models (LLMs) no longer need to rely solely on written text converted into static audio. They can now listen and speak natively, capturing emotional nuance in real time, including things like breathing, fillers, and even laughter. This opens the door to solutions such as AI-powered call centers that are not only infinitely patient but also, by design, always ready to listen and support users with empathy and contextual relevance.

2. WHAT ARE THE MAIN CHALLENGES INTEGRATING AI AND ADVANCED DATA ANALYTICS INTO EXISTING BUSINESS PROCESSES AND SYSTEMS?

The adoption and integration of AI-related technologies is a process that impacts the entire organization and must be approached as a true transformation, where technology is just one side of the coin. The other, equally essential element is people: the company culture must embrace this new way of working and creating value in everyday operations.

From this perspective, it’s crucial that all business units, at different levels of depth depending on their roles, are empowered both culturally and technically to explore how AI can reshape the way business is done. Large enterprises are beginning to move from experimentation to concrete implementation, launching major transformation projects, at least in the areas most visible to the outside world.

For example, AI is revolutionizing customer support, with chat-based interaction acting as a kind of “democratic” entry point for generative AI technologies. In these domains, once the goal of greater efficiency is achieved, the ongoing challenge becomes maintaining a consistently high level of perceived quality for the end users of the service.

A final, broader challenge, one that in many ways goes beyond AI, is the ability of organizations to keep pace with the rapid evolution of the tech landscape, a pace that AI has significantly accelerated. It is now evident that change driven by new technologies is no longer confined to specific moments in time: it has become a continuous state of mind, one that companies must not simply endure, but actively embrace as an essential opportunity.

3. AGENTIC AI IS CURRENTLY ONE OF THE MOST EXCITING AI FRAMEWORKS: WHAT SCENARIOS DOES IT OPEN UP FOR BUSINESS?

In both the Italian and global tech markets, artificial intelligence is becoming an increasingly pervasive technology, particularly in terms of its interoperability with third-party systems through specialized agents.

When it comes to AI Agents, we are witnessing a paradigm shift: moving from a technology initially designed as a passive support, triggered by human-initiated, well-defined tasks, to an AI capable of understanding context, autonomously initiating certain actions, and most importantly, leveraging a wide range of tools.

For instance, some AI systems now widely available are able to conduct scientific research by mimicking evidence-based deductive reasoning, starting from a predefined objective, exploring academic literature, formulating and challenging hypotheses, revisiting approaches when needed, and ultimately providing well-reasoned answers backed by data and verified sources.

Other AI systems can be designed not just to translate code from one language to another, but to translate entire systems, considering relationships between components, the downstream effects of changes, planning and executing test strategies, and ensuring overall system integrity.

That said, a Human-in-the-Loop approach remains essential: AI does not operate in complete autonomy, and human oversight is crucial, especially during the most critical decision-making phases.

In both examples, we are moving away from a model of AI that provides instant answers based on reasoning not disclosed to the user, toward systems that process a task, gather traceable and verifiable information, and deliver well-structured, transparent responses. While these are still evolving technologies, they already offer substantial added value, boosting efficiency and significantly reducing time-to-solution.

4. WHAT ADVANTAGES CAN THE ADOPTION OF A PRIVATE GENERATIVE AI SOLUTION OFFER TO COMPANIES AND PUBLIC ADMINISTRATIONS?

Private Generative AI is a concept that, at least under this specific term, we originally coined here at Engineering. It refers to the ability to implement Generative AI solutions and fully benefit from their capabilities, without compromising on the complete privacy of the data being used.

This means, for example, that the data used to train or fine-tune the reference Large Language Model, the user queries during interaction, the data extracted from integrated systems, and all related outputs can remain entirely within the customer’s perimeter, whether that’s a private cloud or even on-premises infrastructure.

What’s more, a Private Generative AI solution is purpose-built to serve only the organization it was designed for, and can therefore be fully customized to ensure optimal performance for the specific business use case. Designing a private AI doesn’t mean finding the right prompt or prompt sequence to get the desired result: it means teaching the AI the language of your business, enabling it to adapt and respond appropriately on its own.

Finally, unless strictly required by the use case, a Private Generative AI system is not exposed to the public web in any way, significantly reducing the client's cybersecurity risk. The threat of cyberattacks targeting LLMs is still often underestimated, yet it’s more relevant than ever.

At Engineering, we have the capability to develop complex Generative AI systems from the ground up, built on proprietary Large Language Models, such as our own EngGPT, and deploy them directly on the customer’s dedicated hardware. Depending on the use case, this does not necessarily imply massive infrastructure costs.

In fact, we offer lightweight versions of EngGPT suitable for basic Retrieval-Augmented Generation (RAG) tasks, which can even be installed on an iPhone, effectively extending the concept of Private GenAI from "on-premises" to "on-edge-device". One of our key goals is to make the implementation of Private Generative AI solutions lean, efficient, and fully optimized for time-to-market.
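To give a sense of what the retrieval step in a RAG pipeline does, here is a deliberately minimal, self-contained sketch. It scores documents against a query with a simple TF-IDF cosine similarity and returns the best matches, which would then be placed into the LLM prompt as context. This is a generic illustration, not EngGPT's implementation; production systems typically use learned vector embeddings rather than TF-IDF.

```python
# Minimal sketch of the retrieval step in Retrieval-Augmented Generation (RAG).
# Purely illustrative: real systems use embedding models, not TF-IDF.
from collections import Counter
import math

def tokenize(text):
    return [t.lower().strip(".,;:!?") for t in text.split()]

def tfidf_scores(query, documents):
    """Score each document against the query with a simple TF-IDF cosine."""
    doc_tokens = [tokenize(d) for d in documents]
    n = len(documents)
    df = Counter()  # document frequency per term
    for toks in doc_tokens:
        for term in set(toks):
            df[term] += 1

    def vec(tokens):
        tf = Counter(tokens)
        return {t: (1 + math.log(c)) * math.log((1 + n) / (1 + df[t]))
                for t, c in tf.items()}

    qv = vec(tokenize(query))
    scores = []
    for toks in doc_tokens:
        dv = vec(toks)
        dot = sum(qv[t] * dv.get(t, 0.0) for t in qv)
        norm = (math.sqrt(sum(v * v for v in qv.values()))
                * math.sqrt(sum(v * v for v in dv.values()))) or 1.0
        scores.append(dot / norm)
    return scores

def retrieve(query, documents, k=1):
    """Return the k documents most relevant to the query."""
    scores = tfidf_scores(query, documents)
    ranked = sorted(range(len(documents)), key=lambda i: -scores[i])
    return [documents[i] for i in ranked[:k]]

docs = [
    "Invoices are processed within 30 days of receipt.",
    "Employees may request remote work up to three days per week.",
    "The data retention policy requires deletion after five years.",
]
context = retrieve("How long are invoices processed?", docs, k=1)
# The retrieved passage is then inserted into the LLM prompt as grounding context.
```

The point of the sketch is the pipeline shape, not the scoring function: retrieval narrows a large private corpus down to a few passages, so only those passages ever reach the model.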

When we talk about implementing a use case with EngGPT, we’re not simply referring to the deployment of an LLM, but to a comprehensive framework of accelerators that help us deliver Private GenAI solutions faster and more effectively.

Some of these include:

  • EngGPT Data Insights – connects to large structured databases and enables natural language-driven data analysis;
  • EngGPT Docs – allows users to interact with extensive collections of textual documents;
  • EngGPT Vision – enables multimodal capabilities by incorporating images alongside text as input;
  • EngGPT Data Quality – extends data quality assurance to the semantic content within free-text fields;
  • EngGPT Coding – supports documentation and migration of application code.

These tools are all developed in-house and are continuously refined through our hands-on experience with Private Generative AI projects.

5. AI AND BIAS: HOW TO ENSURE FAIR AND TRANSPARENT AI DEVELOPMENT?

The design and implementation of a responsible AI system impacts the entire lifecycle of any AI-related project. The first step often involves analyzing and evaluating the data used to train the models, as this data may provide a partial or biased representation of reality.

Some methods to rebalance models include generating synthetic data to reduce bias, or intelligently selecting a new subset of the available data to ensure fair representation of different groups within the dataset.
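The second rebalancing strategy mentioned above, selecting a subset of the data so that groups are fairly represented, can be sketched in a few lines. This is a toy illustration under the simplest possible scheme (equal counts per group, capped at the smallest group); the field name "group" is a placeholder for whatever sensitive attribute the dataset carries.

```python
# Toy sketch of dataset rebalancing by subset selection: draw an equal number
# of records from each group, capped at the size of the smallest group.
import random

def balanced_subset(records, group_key, seed=0):
    """Return a subset with an equal number of records per group."""
    rng = random.Random(seed)
    by_group = {}
    for rec in records:
        by_group.setdefault(rec[group_key], []).append(rec)
    cap = min(len(members) for members in by_group.values())
    subset = []
    for members in by_group.values():
        subset.extend(rng.sample(members, cap))
    return subset

# A skewed dataset: group A has 90 records, group B only 10.
data = ([{"group": "A", "label": 1}] * 90 +
        [{"group": "B", "label": 0}] * 10)
balanced = balanced_subset(data, "group")
# Each group now contributes 10 records, so the subset has 20 in total.
```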

There are also interventions that can be applied during the model-building phase itself. For example, a model can be designed to produce comparable error rates across different data subgroups, or its objective function can be calibrated to place greater emphasis on segments of the data that are known to be underrepresented.
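One common way to calibrate an objective function toward underrepresented segments is to weight each example's loss contribution by the inverse frequency of its group, so that rare groups carry comparable total weight. The sketch below shows this "balanced" weighting scheme applied to a log-loss; it is a generic illustration, not a description of any specific model's training setup.

```python
# Illustration of reweighting an objective function so underrepresented
# segments carry comparable weight during training.
import math
from collections import Counter

def inverse_frequency_weights(groups):
    """Weight w_g = N / (num_groups * count_g), the common 'balanced' scheme."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

def weighted_log_loss(y_true, y_pred, weights):
    """Binary cross-entropy where each example contributes its own weight."""
    eps = 1e-12
    total = sum(
        w * -(y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps)))
        for y, p, w in zip(y_true, y_pred, weights)
    )
    return total / sum(weights)

groups = ["A"] * 8 + ["B"] * 2      # group B is underrepresented
w = inverse_frequency_weights(groups)
# Each "A" example gets weight 10/(2*8) = 0.625;
# each "B" example gets weight 10/(2*2) = 2.5,
# so both groups contribute a total weight of 5.
```

With these weights, a model that performs poorly on group B is penalized as heavily as one that performs poorly on the much larger group A, which is exactly the emphasis shift the text describes.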

In this way, specific techniques are applied at multiple stages to ensure the model adheres to the principles of Responsible AI.

This focus was central to the development of our Private Generative AI solution, EngGPT, and continues to guide its evolution. Given that this is a generative technology, it was essential to build a large training dataset grounded in the values of Responsible AI, as well as a similarly aligned test dataset designed specifically to evaluate and refine the model.

Finally, we implemented an additional layer of safety measures outside the Large Language Model itself: a series of AI-based safety layers, trained exclusively to detect and prevent harmful or inappropriate behavior based on the specific use case.
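Structurally, such an external safety layer wraps the model rather than living inside it: an independent check screens both the user's request and the model's draft answer before anything reaches the user. The sketch below shows only that control flow; the trivial keyword check stands in for a trained safety classifier, and all names here are illustrative, not EngGPT's actual API.

```python
# Schematic sketch of an external safety layer wrapped around an LLM call.
# The keyword check is a stand-in for a trained harmful-content classifier.

BLOCKLIST = {"malware", "weapon"}   # placeholder for a learned safety model

def is_safe(text):
    return not any(term in text.lower() for term in BLOCKLIST)

def guarded_generate(prompt, generate):
    """Screen both the input and the output of any LLM callable."""
    if not is_safe(prompt):
        return "Request declined by safety policy."
    answer = generate(prompt)
    if not is_safe(answer):
        return "Response withheld by safety policy."
    return answer

# Usage with a dummy model in place of a real LLM:
echo_model = lambda p: f"You asked about: {p}"
print(guarded_generate("company holidays", echo_model))
```

Because the layer sits outside the model, it can be retrained or tightened per use case without touching the underlying LLM, which matches the "by design" separation described above.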


It’s crucial that all business units, at different levels of depth depending on their roles, are empowered both culturally and technically to explore how AI can reshape the way business is done.

Gianmarco Ciarfaglia, Senior Manager of AI & Advanced Analytics at Engineering