(chapter 15 in book)

Fintech companies face the challenge of trying to lead in AI adoption while navigating potential pitfalls. The board of directors plays a critical role in demonstrating leadership and building trust with key stakeholders during the implementation of AI.

This research interviewed board members from Fintech companies to identify the most effective strategies for fostering trust among shareholders, staff, and customers. These three groups have different concerns and face different risks from AI, and the findings reveal that the most effective methods for building trust differ among them. Leaders should build trust with these three stakeholder groups in two ways: first, through the effective and trustworthy implementation of AI, and second, by transparently communicating how AI is used in a manner that addresses stakeholders' concerns. Practical ways to build trust through implementation and communication with each group are presented in Tables 1-3.

The findings show significant overlap between what constitutes effective overall implementation of AI and effective governance of it. However, several issues are identified that relate specifically to how AI innovations should be communicated to build trust. The findings also indicate that certain applications of Generative AI are more conducive to building trust, even if they are more restrained and limited in scope, and some of Generative AI's performance may be sacrificed as a result. Thus, there are trade-offs between unleashing Generative AI in all its capacity and a more constrained, transparent, and predictable application that builds trust among customers, staff, and shareholders. This balancing act, between fast adoption of Generative AI and a more cautious, controlled approach, is at the heart of the challenge the board faces.

Leaders and corporate boards must build trust by providing a suitable strategy and an effective implementation, while maintaining a healthy level of scepticism based on an understanding of AI’s limitations. This balance will lead to more stable and sustainable trust.

Table 1. How leaders can build trust in AI with shareholders

Implementation:
1) Use AI in a way that does not increase financial or other risks.
2) Build in-house expertise; do not rely on a single consultant or technology provider.
3) Create a new committee focused on the governance of AI and data. Accurately evaluate new risks (e.g. compliance).
4) Develop an AI risk framework that the board will use to evaluate and communicate risks from AI implementations; management should update the framework regularly (a minimal sketch follows after this table).
5) Renew the board to bring in more technical knowledge and ensure sufficient competence in AI. Keep up with developments in technology. Ensure all board members understand how Generative AI and traditional AI work.
6) Make the right strategic decisions, and form the right collaborations, to access the necessary technology and data (e.g. through APIs).

Communication:
1) Clear vision on AI use. Illustrate sound business judgement. Showcase the organization’s AI talent.
2) Clear boundaries on what AI does and does not do. Show willingness to enforce these.
3) Illustrate an ability to follow developments: Show similar cases of AI use from competitors, or companies in other areas.
4) If trust is concentrated in specific leaders whose influence will shrink with the increased use of AI, the trust lost must be rebuilt.
5) Be transparent about AI risks so shareholders can also evaluate them as accurately as possible.
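
Item 4 of the implementation list above calls for an AI risk framework the board can use to evaluate and communicate risks. The sketch below shows one minimal, hypothetical way such a risk register could be structured for board reporting; the categories, fields, and scoring are illustrative assumptions, not taken from the chapter.

```python
# A hypothetical AI risk register for board reporting (a sketch, not the
# chapter's framework). Categories and the scoring scheme are assumptions.
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    COMPLIANCE = "compliance"
    FINANCIAL = "financial"
    PRIVACY = "privacy"
    REPUTATIONAL = "reputational"

@dataclass
class AIRisk:
    description: str
    category: RiskCategory
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (minor) to 5 (severe)
    owner: str       # accountable executive or committee

    @property
    def severity(self) -> int:
        # A simple likelihood x impact score for communicating risk levels
        return self.likelihood * self.impact

@dataclass
class AIRiskRegister:
    risks: list[AIRisk] = field(default_factory=list)

    def board_summary(self) -> list[AIRisk]:
        # Risks sorted by severity, highest first, for regular board updates
        return sorted(self.risks, key=lambda r: r.severity, reverse=True)

# Example usage with one illustrative entry:
register = AIRiskRegister()
register.risks.append(AIRisk(
    description="Generative AI chatbot gives non-compliant financial advice",
    category=RiskCategory.COMPLIANCE,
    likelihood=3,
    impact=5,
    owner="AI and data governance committee",
))
for risk in register.board_summary():
    print(risk.severity, risk.category.value, risk.description)
```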

Table 2. How leaders can build trust in AI with staff

Implementation:
1) Show long-term financial commitment to AI initiatives.
2) Encourage a mindset of experimentation, but with an awareness of risks such as privacy, data protection laws and ethical behaviour.
3) Involve staff in the process of digital transformation. Share new progress and new insights gained to illuminate the way forward.
4) Create an AI ethics committee with staff from a range of seniority levels.
5) Give existing staff the necessary skills to effectively utilize Generative AI, rather than hiring new people with technological knowledge who do not know the business. Educate staff on when not to follow, and when to challenge, the findings of AI.
6) Adjust key performance indicators (KPIs): some tasks become easier with AI, but the process of digital transformation is time-consuming.

Communication:
1) Communicate a clear, coherent, long-term vision, with a clear role for staff. The steps towards that vision should reflect the technological changes, the business model changes, and the changes in staff roles.
2) Be open and supportive when staff report problems, so that issues are raised internally and whistleblowing is avoided.

Table 3. How leaders can build trust in AI with customers

Implementation:
1) Avoid using unsupervised Generative AI to complete tasks on its own.
2) Only allow AI to complete tasks on its own where the processes are clear and transparent and the outcomes predictable.
3) Have clear guidelines on how staff can utilize Generative AI, covering what manual checks they should make.
4) Monitor competition and don’t fall behind in how trust in AI is built.  

Communication:
1) Explain where Generative AI and other AI are used and how.
2) Emphasise the values and ethics of the organization and how they still apply when Generative AI, or other AI, is used.

The authors thank the Institute of Corporate Directors Malaysia for their support, and for featuring this research: https://pulse.icdm.com.my/article/how-leadership-in-financial-organisations-build-trust-in-ai-lessons-from-boards-of-directors-in-fintech-in-malaysia/

References

Zarifis A. & Yarovaya L. (2025) ‘Building Trust in AI: Leadership Insights from Malaysian Fintech Boards’ In Zarifis A. & Cheng X. (eds.) Fintech and the Emerging Ecosystems – Exploring Centralised and Decentralised Financial Technologies, Springer: Cham. https://doi.org/10.1007/978-3-031-83402-8_15 (open access)

(chapter 4 in book)
Central bank digital currencies (CBDC) have been implemented by some countries and trialled by many more. As the name suggests, the fundamental characteristics are that this is money that is digital, without a physical note or coin, and issued by a central bank.
The consumer has an increasing range of financial services to choose from, including decentralised, blockchain-based cryptocurrencies. A CBDC may use blockchain technology, but it is centralised, so the institutions that support it play an important role. While being centralised may reduce some risks, it may inadvertently increase others. Despite the centralised, top-down nature of this financial technology, it still needs to be adopted, so the consumer's perspective, particularly their trust in it, is very important. Each CBDC implementation can be different, and each country's context can be different, therefore it is important to understand each case separately.
This research models the Brazilian consumer's trust in their two-tier CBDC, where the central bank and the retail banks retain their current roles (Zarifis and Cheng, 2025). This implementation is not a one-tier solution, where retail banks are bypassed in some ways and the citizen interacts mostly with the central bank.
Existing research that identified six ways to build trust in a different CBDC (Zarifis and Cheng, 2024) was used as a basis. This research tested a model with a seventh, additional way to build trust: that the implementation process, including pilot implementations, would itself build trust. This seventh way was not supported. Therefore, despite the differences in the Brazilian CBDC, the original model applies here also, which suggests the model holds for both two-tier solutions and mixed one- and two-tier solutions.

Figure 1. Model of consumer trust in Brazil’s two-tier CBDC, adapted from (Zarifis and Cheng 2024)

Three institutional and three technological factors are found to play a role. The six supported ways to build trust are: (a) trust in the government and central bank offering the CBDC, (b) expressed guarantees for those using it, (c) the favourable reputation of other active CBDCs, (d) the CBDC technology, with its automation and the limited human involvement necessary, (e) the trust-building features of the CBDC wallet app, and (f) the privacy features of the CBDC wallet app and back-end processes.
It is important to develop user-centred services in Brazil so that trust is built in the services themselves, and in the government institutions that deliver them, sufficiently for broad adoption.

References
Zarifis A. & Cheng X. (2024) ‘The six ways to build trust and reduce privacy concern in a Central Bank Digital Currency (CBDC)’. In Zarifis A., Ktoridou D., Efthymiou L. & Cheng X. (ed.) Business digital transformation: Selected cases from industry leaders, London: Palgrave Macmillan, pp.115-138. https://doi.org/10.1007/978-3-031-33665-2_6 (open access)

Zarifis A. & Cheng X. (2025) ‘A model of trust in Central Bank Digital Currency (CBDC) in Brazil: How trust in a two-tier CBDC with both the central and retail banks involved changes consumer trust’ In Zarifis A. & Cheng X. (eds.) Fintech and the Emerging Ecosystems – Exploring Centralised and Decentralised Financial Technologies, Springer: Cham. https://doi.org/10.1007/978-3-031-83402-8_4 (open access)

Generative AI (GenAI) has seen explosive growth in adoption. However, the consumer's perspective on its use for financial advice is unclear. As with other technologies used in processes that involve risk, trust is one of the challenges that need to be overcome. There are personal information privacy concerns as more information is shared and the ability to process personal information increases.

While the technology has made a breakthrough in its ability to offer financial insight, there are still challenges from the users' perspective. Firstly, users ask a wide variety of financial questions. A user's financial questions may be specific, such as 'does stock X usually give a higher dividend than stock Y', or vague, such as 'how can my investments make me happier'. Financial decisions often have far-reaching, long-term implications.

Figure 1. Model of building trust in advice given by Generative AI when answering financial questions

This research identified four methods that build trust in Generative AI in both scenarios, specific and vague financial questions, and one method that only works for vague questions. Humanness has a different effect on trust in the two scenarios: when a question is specific, humanness does not increase trust, while (1) when a question is vague, human-like Generative AI increases trust. The four ways to build trust in both scenarios are: (2) human oversight and being in the loop, (3) transparency and control, (4) accuracy and usefulness, and finally (5) ease of use and support. For the best results, all the methods identified should be used together to build trust. These variables can provide the basis for guidelines for organizations in finance utilizing Generative AI.

A business providing Generative AI for financial decisions must be clear about what it is being used for. For example, analysing past financial performance to attempt to predict future performance is very different from analysing social media activity. The advice of Generative AI needs to feel like a fully integrated part of the financial community, not just a system. Trust must be built sufficiently to overcome the perceived risk. The findings suggest that the consumer will not follow the 'pied piper' blindly, however alluring 'their song' of automation and efficiency is.

Reference
Zarifis A. & Cheng X. (2024) ‘How to build trust in answers given by Generative AI for specific, and vague, financial questions’, Journal of Electronic Business & Digital Economics, pp.1-15. https://doi.org/10.1108/JEBDE-11-2023-0028 (open access)

Cryptocurrencies' popularity is growing despite short-term fluctuations. Peer-reviewed research into trust in cryptocurrency payments started in 2014 (Zarifis et al., 2014, 2015). While the model created then is based on proven theories from psychology and supported by empirical research, a lot has changed in the past 10 years. This research re-evaluates and extends the first model of trust in cryptocurrencies and delivers the second extended model of consumer trust in cryptocurrencies, CRYPTOTRUST 2 (Zarifis & Fu, 2024), as seen in Figure 1.

Figure 1: The second extended model of consumer trust in cryptocurrencies (CRYPTOTRUST 2)

Trust in a cryptocurrency is a multifaceted issue. While some believe that the consumer does not need to trust cryptocurrencies because they utilize blockchain, most people appreciate that you must trust cryptocurrencies, just as you must trust any other technology you use that involves some risk.

The first three variables of the model come from the individual’s psychology: Personal innovativeness is divided into (1) personal innovativeness in technology and (2) personal innovativeness in finance. These two influence (3) personal disposition to trust.

There are then six variables that come from the specific context rather than the person's psychology. The first three relate to the cryptocurrency itself: (4) the stability of the cryptocurrency's value, (5) the transaction fees and (6) its reputation. Institutional trust is shaped by (7) regulation and (8) the payment intermediaries that may be involved in fulfilling the transaction. The last contextual factor is (9) trust in the retailer. The six contextual variables influence (10) trust in the cryptocurrency payment, which then, finally, influences (11) the likelihood of making the cryptocurrency payment.
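
To make the structure concrete, the path relationships described above can be written as a lavaan-style structural model, here sketched with the Python package semopy. This is a minimal sketch: the construct names are illustrative shorthand rather than the authors' published labels, only the paths stated in this summary are encoded, and the exact specification should be checked against Figure 1 of the paper.

```python
# A minimal sketch of the CRYPTOTRUST 2 path structure in semopy.
# Construct names are illustrative shorthand; paths encode only the
# relationships stated in the summary above.
import semopy

DESC = """
disposition_to_trust ~ innovativeness_tech + innovativeness_finance
trust_in_payment ~ value_stability + transaction_fees + reputation + regulation + payment_intermediaries + trust_in_retailer
payment_likelihood ~ trust_in_payment
"""

model = semopy.Model(DESC)
# model.fit(survey_df)    # survey_df: a pandas DataFrame of construct scores
# print(model.inspect())  # estimated path coefficients
```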

Separating personal innovativeness into (1) personal innovativeness in technology and (2) personal innovativeness in finance is a useful distinction, as some consumers may have different levels of innovativeness in each. The analysis here supports that these are separate constructs.

This research shows that trust in cryptocurrencies has not changed fundamentally, but it has evolved. All the main actors in the value chain still play a role in building trust. There is more emphasis from the consumer on having a stable value and low transaction fees. This may be because consumers now have more experience with cryptocurrencies, and they are better informed. It may also be because there are more cryptocurrencies available, and other alternatives such as Central Bank Digital Currencies (CBDC), so consumers can review the many alternatives and try to identify the best one.

References

Zarifis A., Cheng X., Dimitriou S. & Efthymiou L. (2015) ‘Trust in digital currency enabled transactions model’, Proceedings of the Mediterranean Conference on Information Systems (MCIS), pp.1-8. https://aisel.aisnet.org/mcis2015/3/

Zarifis A., Efthymiou L., Cheng X. & Demetriou S. (2014) ‘Consumer trust in digital currency enabled transactions’, Lecture Notes in Business Information Processing-Springer, vol.183, pp.241-254. https://doi.org/10.1007/978-3-319-11460-6

Zarifis A. & Fu S. (2024) ‘The second extended model of consumer trust in cryptocurrency payments, CRYPTOTRUST 2’, Frontiers in Blockchain, vol.7, pp.1-11. https://doi.org/10.3389/fbloc.2024.1220031 (open access)

Financial technology, often referred to as Fintech, and sustainability are two of the biggest influences transforming many organizations. However, not all organizations move forward on both with the same enthusiasm. Leaders in Fintech do not always prioritize operating in a sustainable way. It is, therefore, important to find the synergies between Fintech and sustainability.

One important aspect of this transformation many organizations are going through is the consumers' perspective, particularly the trust they have, their personal information privacy concerns, and the vulnerability they feel. It is important to clarify whether leadership in Fintech combined with leadership in sustainability is more beneficial than leadership in Fintech on its own.

This research evaluates consumers' trust, privacy concerns, and vulnerability in the two scenarios separately and then compares them. Firstly, it seeks to validate whether leadership in Fintech influences trust in Fintech, concerns about the privacy of personal information when using Fintech, and the feeling of vulnerability when using Fintech. It then compares trust, privacy concerns and vulnerability in two scenarios: one with leadership in both Fintech and sustainability, and one with leadership in Fintech alone.

Figure 1. Leadership in Fintech, trust, privacy and vulnerability, with and without sustainability

The findings show that, as expected, leadership in both Fintech and sustainability builds trust more, which in turn reduces vulnerability more. Privacy concerns are lower when sustainability leadership and Fintech leadership come together; however, this difference was not statistically significant. So, contrary to what was expected, privacy concerns are not reduced more effectively when there is leadership in both together.

The findings support the link between sustainability in a Fintech's processes and its success. While the limited research on Fintech and sustainability finds support for the link between them by taking a 'top-down' approach, evaluating Fintech companies against benchmarks such as economic value, this research takes a 'bottom-up' approach by looking at how Fintech services are received by consumers.

An important practical implication of this research is that even when there is sufficient trust to adopt and use Fintech, the consumer often still feels a sense of vulnerability. This means leaders in Fintech must not just do enough for the consumer to adopt their service; they should go beyond that and try to build trust and reduce privacy concerns to the degree that the consumer's sense of vulnerability is also reduced.

These findings can inform a Fintech’s business model and the services it offers consumers.

Reference

Zarifis A. (2024) ‘Leadership in Fintech builds trust and reduces vulnerability more when combined with leadership in sustainability’, Sustainability, 16, 5757, pp.1-13. https://doi.org/10.3390/su16135757 (open access)

Featured by FinTech Scotland: https://www.fintechscotland.com/leadership-in-fintech-builds-trust-and-reduces-vulnerability/

A Non-Fungible Token, usually referred to by its acronym NFT, uses technology that involves data on a blockchain that cannot be changed after it has been added. Therefore, while NFTs share similar blockchain technology with cryptocurrencies, their functionality is different. NFTs' functionality enables them to be used to prove ownership of an intangible digital, or tangible physical, asset, and the associated rights the owner has. The most popular practical applications of NFTs for digital assets are proving ownership of digital art, virtual items in computer games, and music.

The unique features of NFTs are becoming increasingly appealing as we spend more of our time online. Despite this increased popularity there is a lack of clarity over the final form this digital asset will take. The purchasing process in particular needs to be clarified.

This research developed a model of the purchasing process of NFTs and the role of trust in this process. The model identifies four stages in the purchasing process, each of which requires trust. In the figure, the four stages of the purchasing process are shown on the left, the trust required at each stage runs along the center, and, on the right, trust in all four stages leads to trust in the NFT purchase.

Figure 1. Model of consumer trust at each stage of the NFT purchasing process

The four stages of the purchase are: first, set up a cryptocurrency wallet to pay for the NFT and to be able to receive it; second, purchase cryptocurrency with the cryptocurrency wallet; third, use the cryptocurrency wallet to pay for an NFT on an NFT marketplace; and finally, fourth, the after-sales service, which may involve returns or some other form of support.

The model supported by our analysis identified trust at four stages: first, trust in the cryptocurrency wallet; second, trust in the cryptocurrency purchase; third, trust in the NFT marketplace; and fourth, trust in after-sales service and resolving disputes.
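
As a simple illustration, the four stages and the trust checkpoint the model attaches to each can be written down as a lookup. This is a sketch paraphrasing the model's stages, not code or terminology from the paper.

```python
# A sketch of the four-stage NFT purchase flow and the trust required
# at each stage, paraphrased from the model described above.
from enum import Enum

class NFTPurchaseStage(Enum):
    SETUP_WALLET = "set up a cryptocurrency wallet"
    BUY_CRYPTOCURRENCY = "purchase cryptocurrency with the wallet"
    PAY_ON_MARKETPLACE = "pay for the NFT on an NFT marketplace"
    AFTER_SALES = "after-sales service, returns and support"

TRUST_REQUIRED = {
    NFTPurchaseStage.SETUP_WALLET: "trust in the cryptocurrency wallet",
    NFTPurchaseStage.BUY_CRYPTOCURRENCY: "trust in the cryptocurrency purchase",
    NFTPurchaseStage.PAY_ON_MARKETPLACE: "trust in the NFT marketplace",
    NFTPurchaseStage.AFTER_SALES: "trust in after-sales service and resolving disputes",
}

# Trust in the overall purchase requires trust at every stage:
for stage in NFTPurchaseStage:
    print(f"{stage.name}: requires {TRUST_REQUIRED[stage]}")
```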

Reference

Zarifis A. & Castro L.A. (2022) 'The NFT purchasing process and the challenges to trust at each stage', Sustainability, vol.14, no.24:16482, pp.1-13. https://doi.org/10.3390/su142416482 (open access)

New Fintech and Insurtech services are popular with consumers as they offer convenience, new capabilities and, in some cases, lower prices. Consumers like these technologies, but do they trust them? The role of consumer trust in the adoption of these new technologies is not entirely understood. From the consumer's perspective, there are some concerns due to the lack of transparency these technologies can have. It is unclear if these systems powered by artificial intelligence (AI) are trusted, and how many interactions with consumers they can replace. Several recent adverts emphasizing that a company will not force you to communicate with AI, and will provide a real person to communicate with, are evidence of some push-back by consumers. Even pioneers of AI like Google are offering more opportunities to talk to a real person, an indirect acknowledgment that some people do not trust the technology. Therefore, this research attempts to shed light on the role of trust in Fintech and Insurtech, especially whether trust in AI in general and trust in the specific institution play a role (Zarifis & Cheng, 2022).

Figure 1. A model of trust in Fintech/Insurtech

This research validates a model, illustrated in Figure 1, that identifies the four factors that influence trust in Fintech and Insurtech. As with many other models of human behavior, the starting point is the individual's psychology and the sociology of their environment. The model then separates trust in a specific organization from trust in a specific technology like AI. This is an important distinction: consumers bring with them pre-existing beliefs about the organization and separate pre-existing beliefs about AI, and their beliefs about AI might have been shaped by experiences with other organizations.

Therefore, the validated model shows that trust in Fintech or Insurtech is formed by the (1) individual’s psychological disposition to trust, (2) sociological factors influencing trust, (3) trust in either the financial organization or the insurer and (4) trust in AI and related technologies.

This model was initially tested separately for Fintech and Insurtech. In addition to validating a model of trust in Fintech and in Insurtech separately, the two models were compared to see if they are equally valid or different. For example, if one variable were more influential in one of the two models, this would suggest that the model of trust in one is not the same as in the other. The results of the multigroup analysis show that the model is indeed equally valid for Fintech and Insurtech. Having a model of trust that is suitable for both is particularly useful, as these services are often offered by the same organization, or even side by side in the same mobile application.
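
As a rough illustration of the multigroup logic, the sketch below fits the same hypothesized model to the Fintech and Insurtech subsamples and prints the path estimates for comparison, again using the Python package semopy. The construct names and the 'service_type' column are assumptions for illustration; the published study uses a formal multigroup analysis rather than this naive per-group comparison.

```python
# A rough sketch of the multigroup idea: fit the same model per group
# and compare path estimates (the paper's formal multigroup analysis
# additionally tests whether any differences are statistically significant).
import pandas as pd
import semopy

# The four factors from the validated model, as illustrative shorthand names
DESC = """
trust_in_service ~ disposition_to_trust + sociological_factors + trust_in_organisation + trust_in_ai
"""

def compare_groups(survey_df: pd.DataFrame) -> None:
    # survey_df: construct scores plus an assumed 'service_type' column
    # taking the values 'fintech' or 'insurtech'
    for service_type, group_df in survey_df.groupby("service_type"):
        model = semopy.Model(DESC)
        model.fit(group_df)
        print(service_type)
        print(model.inspect())  # path coefficients for this group
```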

Reference

Zarifis A. & Cheng X. (2022) 'A model of trust in Fintech and trust in Insurtech: How Artificial Intelligence and the context influence it', Journal of Behavioral and Experimental Finance, vol.36, pp.1-20. https://doi.org/10.1016/j.jbef.2022.100739 (open access)