(chapter 15 in book)

Fintech companies face the challenge of trying to lead in AI adoption while navigating potential pitfalls. The board of directors plays a critical role in demonstrating leadership and building trust with key stakeholders during the implementation of AI.

This research interviewed board members from Fintech companies to identify the most effective strategies for fostering trust among shareholders, staff, and customers. These three groups have different concerns and face different risks from AI, and the findings reveal that the most effective methods for building trust differ among them. Leaders should build trust with these three stakeholder groups in two ways: first, through the effective and trustworthy implementation of AI, and second, by transparently communicating how AI is used in a manner that addresses stakeholders' concerns. The practical ways to build trust through implementation and through communication with each group, shareholders, staff, and customers, are presented in Tables 1-3.

The findings show that building trust overlaps significantly with the effective overall implementation and governance of AI. However, several issues are identified that relate specifically to how AI innovations should be communicated to build trust. The findings also indicate that certain applications of Generative AI are more conducive to building trust, even if they are more restrained and limited in scope, and some of Generative AI's performance may be sacrificed as a result. Thus, there are trade-offs between unleashing Generative AI in all its capacity and a more constrained, transparent, and predictable application that builds trust among customers, staff, and shareholders. This balancing act, between fast adoption of Generative AI and a more cautious, controlled approach, is at the heart of the challenge the board faces.

Leaders and corporate boards must build trust by providing a suitable strategy and an effective implementation, while maintaining a healthy level of scepticism based on an understanding of AI’s limitations. This balance will lead to more stable and sustainable trust.

Table 1. How leaders can build trust in AI with shareholders

Implementation:
1) Use AI in a way that does not increase financial or other risks.
2) Build in-house expertise rather than relying on a single consultant or technology provider.
3) Create a new committee focused on the governance of AI and data. Accurately evaluate new risks (compliance, etc.).
4) Develop a framework of AI risk that the board will use to evaluate and communicate risks from AI implementations. Management should update the framework regularly.
5) Renew the board to bring in more technical knowledge and ensure sufficient competence in AI. Keep up with developments in the technology. Ensure all board members understand how Generative AI and traditional AI work.
6) Make the right strategic decisions, and form the right collaborations, to secure the necessary technology and data (e.g. through APIs).

Communication:
1) Communicate a clear vision for AI use. Demonstrate sound business judgement. Showcase the organization's AI talent.
2) Set clear boundaries on what AI does and does not do, and show a willingness to enforce them.
3) Demonstrate an ability to follow developments: show similar cases of AI use from competitors, or from companies in other sectors.
4) If trust is concentrated in specific leaders whose influence will diminish with the increased use of AI, the trust lost must be rebuilt.
5) Be transparent about AI risks so that shareholders can also evaluate them as accurately as possible.

Table 2. How leaders can build trust in AI with staff

Implementation:
1) Show a long-term financial commitment to AI initiatives.
2) Encourage a mindset of experimentation, but with an awareness of risks such as privacy, data protection laws and ethical behaviour.
3) Involve staff in the process of digital transformation. Share new progress and new insights gained to illuminate the way forward.
4) Create an AI ethics committee with staff from a variety of seniority levels.
5) Give existing staff the necessary skills to effectively utilize Generative AI, rather than hiring new people with technological knowledge who do not know the business. Educate staff on when not to follow, and when to challenge, the findings of AI.
6) Key performance indicators (KPIs) need to be adjusted. Some tasks become easier with AI, but the process of digital transformation is time-consuming.

Communication:
1) Communicate a clear, coherent, long-term vision, with a clear role for staff. The steps towards that vision should reflect the technological changes, business model changes, and the changes in staff roles.
2) Be open and supportive towards staff who report problems, so that whistleblowing is avoided.

Table 3. How leaders can build trust in AI with customers

Implementation:
1) Avoid using unsupervised Generative AI to complete tasks on its own.
2) Only allow AI to complete tasks on its own when its processes are clear and transparent and its outcomes are predictable.
3) Have clear guidelines on how staff can utilize Generative AI, covering what manual checks they should make.
4) Monitor competition and don’t fall behind in how trust in AI is built.  

Communication:
1) Explain where Generative AI and other AI are used and how.
2) Emphasise the values and ethics of the organization and how they still apply when Generative AI, or other AI, is used.

The authors thank the Institute of Corporate Directors Malaysia for their support, and for featuring this research: https://pulse.icdm.com.my/article/how-leadership-in-financial-organisations-build-trust-in-ai-lessons-from-boards-of-directors-in-fintech-in-malaysia/

References

Zarifis A. & Yarovaya L. (2025) ‘Building Trust in AI: Leadership Insights from Malaysian Fintech Boards’, in Zarifis A. & Cheng X. (eds.) Fintech and the Emerging Ecosystems – Exploring Centralised and Decentralised Financial Technologies, Springer: Cham. Available from (open access): https://doi.org/10.1007/978-3-031-83402-8_15

Generative AI (GenAI) has seen explosive growth in adoption. However, the consumer’s perspective on its use for financial advice is unclear. As with other technologies used in processes that involve risk, trust is one of the challenges that need to be overcome. There are personal information privacy concerns, as more information is shared and the ability to process personal information increases.

While the technology has made a breakthrough in its ability to offer financial insight, there are still challenges from the users’ perspective. Firstly, users ask a wide variety of financial questions. A user’s financial question may be specific, such as ‘does stock X usually give a higher dividend than stock Y’, or vague, such as ‘how can my investments make me happier’. Financial decisions often have far-reaching, long-term implications.

Figure 1. Model of building trust in advice given by Generative AI when answering financial questions

This research identified four methods to build trust in Generative AI in both scenarios, specific and vague financial questions, and one method that only works for vague questions. Humanness has a different effect on trust in the two scenarios: when a question is specific, humanness does not increase trust, while (1) when a question is vague, human-like Generative AI increases trust. The four ways to build trust in both scenarios are: (2) human oversight and being in the loop, (3) transparency and control, (4) accuracy and usefulness, and finally (5) ease of use and support. For the best results, all the methods identified should be used together to build trust. These variables can provide the basis for guidelines for organizations in finance utilizing Generative AI.

A business providing Generative AI for financial decisions must be clear about what it is being used for. For example, analysing past financial performance to attempt to predict future performance is very different from analysing social media activity. The advice of Generative AI needs to feel like a fully integrated part of the financial community, not just a system. Trust must be built sufficiently to overcome the perceived risk. The findings suggest that the consumer will not follow the ‘pied piper’ blindly, however alluring its ‘song’ of automation and efficiency is.

Reference
Zarifis A. & Cheng X. (2024) ‘How to build trust in answers given by Generative AI for specific, and vague, financial questions’, Journal of Electronic Business & Digital Economics, pp. 1-15. Available from (open access): https://doi.org/10.1108/JEBDE-11-2023-0028

New Fintech and Insurtech services are popular with consumers as they offer convenience, new capabilities and, in some cases, lower prices. Consumers like these technologies, but do they trust them? The role of consumer trust in the adoption of these new technologies is not entirely understood. From the consumer’s perspective, there are some concerns due to the lack of transparency these technologies can have. It is unclear whether these systems powered by artificial intelligence (AI) are trusted, and how many interactions with consumers they can replace. Several recent adverts emphasizing that a company will not force you to communicate with AI, and will provide a real person to communicate with, are evidence of some push-back by consumers. Even pioneers of AI like Google are offering more opportunities to talk to a real person, an indirect acknowledgment that some people do not trust the technology. Therefore, this research attempts to shed light on the role of trust in Fintech and Insurtech, especially whether trust in AI in general and trust in the specific institution play a role (Zarifis & Cheng, 2022).

Figure 1. A model of trust in Fintech/Insurtech

This research validates a model, illustrated in Figure 1, that identifies the four factors that influence trust in Fintech and Insurtech. As with many other models of human behavior, the starting point is the individual’s psychology and the sociology of their environment. Then, the model separates trust in a specific organization from trust in a specific technology like AI. This is an important distinction: consumers bring with them beliefs about the organization, as well as pre-existing beliefs about AI. Their beliefs about AI might have been shaped by experiences with other organizations.

Therefore, the validated model shows that trust in Fintech or Insurtech is formed by the (1) individual’s psychological disposition to trust, (2) sociological factors influencing trust, (3) trust in either the financial organization or the insurer and (4) trust in AI and related technologies.

This model was initially tested separately for Fintech and Insurtech. In addition to validating the model for each separately, the two models were compared to see whether they are equally valid or different. For example, if one variable were more influential in one of the two models, this would suggest that the model of trust in one is not the same as in the other. The results of the multigroup analysis show that the model is indeed equally valid for Fintech and Insurtech. Having a model of trust that is suitable for both Fintech and Insurtech is particularly useful, as these services are often offered by the same organization, or even side by side in the same mobile application.

Reference

Zarifis A. & Cheng X. (2022) ‘A model of trust in Fintech and trust in Insurtech: How Artificial Intelligence and the context influence it’, Journal of Behavioral and Experimental Finance, vol. 36, pp. 1-20. Available from (open access): https://doi.org/10.1016/j.jbef.2022.100739

The volatile times we live in create many real and perceived risks for people (Zarifis et al. 2022). This makes trust harder to build and maintain. Since the turn of the century, increased reliance on technology has made many parts of our lives more impersonal, which made trust harder to build. So, technology was often a barrier to trust. Now, we no longer necessarily prioritise trust in humans over trust in machines, and there are many technologies that support trust.

Trustech is technology that builds and protects user, or consumer, trust. The importance of trust, and of the technology that supports it, has increased over the years. We are now at a point where a specialised term is needed to represent the technology that supports trust. After similar terms such as Fintech and Insurtech comes Trustech. The use of the term Trustech will focus minds on this important area.

Dr Alex Zarifis, FHEA

Reference

Zarifis A., Cheng X., Jayawickrama U. & Corsi S. (2022) ‘Can Global, Extended and Repeated Ransomware Attacks Overcome the User’s Status Quo Bias and Cause a Switch of System?’, International Journal of Information Systems in the Service Sector (IJISSS), vol.14, iss.1, pp.1-16. Available from (open access): https://doi.org/10.4018/IJISSS.289219

By Alex Zarifis and the Trust Update team

The capabilities of Artificial Intelligence are increasing dramatically, and it is disrupting insurance and healthcare. In insurance, AI is used to detect fraudulent claims, and natural language processing is used by chatbots to interact with the consumer. In healthcare, it is used to make diagnoses and plan treatments. The consumer is benefiting from customized health insurance offers and real-time adaptation of fees. Currently, the interface between the consumer purchasing health insurance and AI raises some barriers, such as insufficient trust and privacy concerns.

Consumers are not passive to the increasing role of AI. Many consumers have beliefs on what this technology should do. Furthermore, regulation is moving toward making it necessary for the use of AI to be explicitly revealed to the consumer (European Commission 2019). Therefore, the consumer is an important stakeholder and their perspective should be understood and incorporated into future AI solutions in health insurance.

Dr Alex Zarifis discussing Artificial Intelligence at Loughborough University

Recent research at Loughborough University (Zarifis et al. 2021) identified two scenarios: one with limited AI that is not in the interface and whose presence is not explicitly revealed to the consumer, and a second where there is an AI interface and AI evaluation, and this is explicitly revealed to the consumer. The findings show that trust is lower when AI is used in the interactions and is visible to the consumer. Privacy concerns were also higher when the AI was visible, but the difference was smaller. The implications for practice relate to how the reduced trust and increased privacy concerns with visible AI can be mitigated.

Mitigate the lower trust with explicit AI

The causes are reduced transparency and explainability. A statement at the start of the consumer journey about the role AI will play, and how it works, will increase transparency and reinforce trust. Secondly, the importance of trust increases as the perceived risk increases; therefore, the risks should be reduced. Thirdly, it should be illustrated that the increased use of AI does not reduce the inherent humanness of the service. For example, it can be shown how humans train AI and how AI adopts human values.

Mitigate the higher privacy concerns with explicit AI

The consumer is concerned about how AI will utilize their financial, health and other personal information. Health insurance providers offer privacy assurances and privacy seals, but these do not explicitly refer to the role of AI. Assurances can be provided about how AI will use, share and securely store the information. These assurances can include some explanation of the role of AI and cover confidentiality, secrecy and anonymity. For example, while the consumer’s information may be used to train machine learning models, it can be made clear that it will be anonymized first. The consumer’s perceived privacy risk can also be mitigated by making the regulation that protects them clear.

References

European Commission (2019) ‘Ethics Guidelines for Trustworthy AI.’ Available from: https://ec.europa.eu/digital

Zarifis A., Kawalek P. & Azadegan A. (2021). ‘Evaluating if Trust and Personal Information Privacy Concerns are Barriers to Using Health Insurance that Explicitly Utilizes AI’, Journal of Internet Commerce, vol.20, pp.66-83. Available from (open access): https://doi.org/10.1080/15332861.2020.1832817