I previously listed principles published before December 2017 here; this blogpost appends principles drafted since January 2018 (plus one from October 2017 that I had missed). The principles are listed here (in full or abridged) with links, notes and references, but without critique.
Scroll down to the Updated principles section below for the updates.
If there are any (prominent) ones I’ve missed, please let me know.
Asimov’s three laws of Robotics (1950)
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Murphy and Woods’ three laws of Responsible Robotics (2009)
- A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
- A robot must respond to humans as appropriate for their roles.
- A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.
These were proposed in Robin Murphy and David Woods’ paper Beyond Asimov: The Three Laws of Responsible Robotics [2].
EPSRC Principles of Robotics (2010)
- Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
- Humans, not robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy.
- Robots are products. They should be designed using processes which assure their safety and security.
- Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
- The person with legal responsibility for a robot should be attributed.
Future of Life Institute Asilomar principles for beneficial AI (Jan 2017)
I will not list all 23 principles but extract just a few to compare and contrast with the others listed here:
6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
An account of the development of the Asilomar principles can be found here.
The ACM US Public Policy Council Principles for Algorithmic Transparency and Accountability (Jan 2017)
- Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
- Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
- Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
- Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
- Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process.
- Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected. (A minimal provenance and audit-logging sketch follows this list.)
- Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results.
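The Data Provenance and Auditability principles lend themselves to a concrete illustration. The following is a minimal sketch, not part of the ACM text, of how a team might record how its training data was gathered and append each model decision to an audit log; all names, fields and file formats here are hypothetical:

```python
# A minimal sketch of training-data provenance plus decision audit logging.
# All names and fields are illustrative, not a standard API.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ProvenanceRecord:
    dataset_name: str
    collected_by: str          # person or pipeline that gathered the data
    collection_method: str     # e.g. survey, web scrape, sensor feed
    time_range: str            # period the data covers
    known_biases: list[str]    # documented sampling or labelling biases

def log_decision(audit_file: str, model_id: str, inputs: dict, decision: str) -> None:
    """Append one model decision to a JSON-lines audit file."""
    entry = {"ts": time.time(), "model": model_id,
             "inputs": inputs, "decision": decision}
    with open(audit_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    record = ProvenanceRecord(
        dataset_name="loan_applications_2018",
        collected_by="intake-pipeline-v2",
        collection_method="online application forms",
        time_range="2018-01 to 2018-12",
        known_biases=["under-represents applicants without internet access"],
    )
    print(json.dumps(asdict(record), indent=2))
    log_decision("audit.jsonl", "credit-model-v1",
                 {"income": 42000, "term_months": 36}, "approved")
```

A production system would add access control and retention policies, but even this much makes it possible to reconstruct after the fact why a decision was made.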
Japanese Society for Artificial Intelligence (JSAI) Ethical Guidelines (Feb 2017)
- Contribution to humanity: Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity.
- Abidance of laws and regulations: Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly.
- Respect for the privacy of others: Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations.
- Fairness: Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI.
- Security: As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control.
- Act with integrity: Members of the JSAI are to acknowledge the significant impact which AI can have on society.
- Accountability and Social Responsibility: Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed.
- Communication with society and self-development: Members of the JSAI must aim to improve and enhance society’s understanding of AI.
- Abidance of ethics guidelines by AI: AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.
Draft principles of The Future Society’s Science, Law and Society Initiative (Oct 2017)
- AI should advance the well-being of humanity, its societies, and its natural environment.
- AI should be transparent.
- Manufacturers and operators of AI should be accountable.
- AI’s effectiveness should be measurable in the real-world applications for which it is intended.
- Operators of AI systems should have appropriate competencies.
- The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society.
Montréal Declaration for Responsible AI draft principles (Nov 2017)
- Well-being: The development of AI should ultimately promote the well-being of all sentient creatures.
- Autonomy: The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
- Justice: The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental/physical abilities, sexual orientation, ethnic or social origins and religious beliefs.
- Privacy: The development of AI should offer guarantees respecting personal privacy and allowing people who use it to access their personal data as well as the kinds of information that any algorithm might use.
- Knowledge: The development of AI should promote critical thinking and protect us from propaganda and manipulation.
- Democracy: The development of AI should promote informed participation in public life, cooperation and democratic debate.
- Responsibility: The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.
IEEE General Principles of Ethical Autonomous and Intelligent Systems (Dec 2017)
- How can we ensure that A/IS do not infringe human rights?
- Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being.
- How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable?
- How can we ensure that A/IS are transparent?
- How can we extend the benefits and minimize the risks of AI/AS technology being misused?
A short article, Why Principles Matter, co-authored with IEEE general principles co-chair Mark Halverson, explains the link between principles and standards, together with further commentary and references.
Note that these principles have been revised and extended, in March 2019 (see below).
UNI Global Union Top 10 Principles for Ethical AI (Dec 2017)
- Demand That AI Systems Are Transparent
- Equip AI Systems With an Ethical Black Box (a minimal logging sketch follows this list)
- Make AI Serve People and Planet
- Adopt a Human-In-Command Approach
- Ensure a Genderless, Unbiased AI
- Share the Benefits of AI Systems
- Secure a Just Transition and Ensure Support for Fundamental Freedoms and Rights
- Establish Global Governance Mechanisms
- Ban the Attribution of Responsibility to Robots
- Ban AI Arms Race
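The ethical black box principle borrows the flight-data-recorder idea from aviation: autonomous systems should keep a record of what they sensed and decided so that accidents can be investigated. The sketch below is a toy illustration under that reading, not UNI’s specification; the hash-chaining scheme simply makes after-the-fact tampering with the log detectable:

```python
# A toy "ethical black box": an append-only, tamper-evident log of what a
# system sensed and decided. The chaining scheme is illustrative only.
import hashlib
import json
import time

class EthicalBlackBox:
    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value for the hash chain

    def record(self, sensed: dict, decision: str) -> None:
        """Append one sense/decide event, chained to the previous entry."""
        entry = {"ts": time.time(), "sensed": sensed,
                 "decision": decision, "prev": self._last_hash}
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._entries.append(entry)

    def verify(self) -> bool:
        """Recompute the hash chain; False means the log was altered."""
        prev = "0" * 64
        for entry in self._entries:
            if entry["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
        return True

ebb = EthicalBlackBox()
ebb.record({"obstacle_distance_m": 0.4}, "stop")
ebb.record({"obstacle_distance_m": 2.1}, "proceed")
assert ebb.verify()
```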
Updated principles
Intel’s recommendation for Public Policy Principles on AI (October 2017)
- Foster Innovation and Open Development – To better understand the impact of AI and explore the broad diversity of AI implementations, public policy should encourage investment in AI R&D. Governments should support the controlled testing of AI systems to help industry, academia, and other stakeholders improve the technology.
- Create New Human Employment Opportunities and Protect People’s Welfare – AI will change the way people work. Public policy in support of adding skills to the workforce and promoting employment across different sectors should enhance employment opportunities while also protecting people’s welfare.
- Liberate Data Responsibly – AI is powered by access to data. Machine learning algorithms improve by analyzing more data over time; data access is imperative to achieve more enhanced AI model development and training. Removing barriers to the access of data will help machine learning and deep learning reach their full potential.
- Rethink Privacy – Privacy approaches like the Fair Information Practice Principles and Privacy by Design have withstood the test of time and the evolution of new technology. But with innovation, we have had to “rethink” how we apply these models to new technology.
- Require Accountability for Ethical Design and Implementation – The social implications of computing have grown and will continue to expand as more people have access to implementations of AI. Public policy should work to identify and mitigate discrimination caused by the use of AI and encourage designing in protections against these harms.
Lords Select Committee 5 core principles to keep AI ethical (April 2018)
- Artificial intelligence should be developed for the common good and benefit of humanity.
- Artificial intelligence should operate on principles of intelligibility and fairness.
- Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
- All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
- The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
AI UX: 7 Principles of Designing Good AI Products (April 2018)
- Differentiate AI content visually – let people know if an algorithm has generated a piece of content so they can decide for themselves whether to trust it or not.
- Explain how machines think – help people understand how machines work so they can use them better.
- Set the right expectations – especially in a world full of sensational, superficial news about new AI technologies.
- Find and handle weird edge cases – spend more time testing and finding weird, funny, or even disturbing or unpleasant edge cases.
- User testing for AI products (default methods won’t work here).
- Provide an opportunity to give feedback.
The Toronto Declaration on equality and non-discrimination in machine learning systems (May 2018)
The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems does not succinctly articulate ethical principles, but instead presents arguments under the following headings to address concerns “about the capability of [machine learning] systems to facilitate intentional or inadvertent discrimination against certain individuals or groups of people”.
- Using the framework of international human rights law: the right to equality and non-discrimination; preventing discrimination; and protecting the rights of all individuals and groups (promoting diversity and inclusion)
- Duties of states (human rights obligations): state use of machine learning systems; promoting equality; and holding private sector actors to account
- Responsibilities of private sector actors: human rights due diligence
- The right to an effective remedy
Google AI Principles (June 2018)
- Be socially beneficial.
- Avoid creating or reinforcing unfair bias.
- Be built and tested for safety.
- Be accountable to people.
- Incorporate privacy design principles.
- Uphold high standards of scientific excellence.
- Be made available for uses that accord with these principles.
IBM’s 5 ethical AI principles (September 2018)
- Accountability: AI designers and developers are responsible for considering AI design, development, decision processes, and outcomes.
- Value alignment: AI should be designed with the norms and values of its user group in mind.
- Explainability: AI should be designed so that humans can easily perceive, detect, and understand its decision process and its predictions/recommendations. This is also, at times, referred to as the interpretability of AI. Simply put, users have every right to ask for the details of predictions made by AI models, such as which features contributed to a prediction and to what extent, and each prediction should be open to review. (A toy feature-contribution sketch follows this list.)
- Fairness: AI must be designed to minimize bias and promote inclusive representation.
- User data rights: AI must be designed to protect user data and preserve the user’s power over access and uses.
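IBM’s Explainability principle asks that users be able to see which features contributed to a prediction and to what extent. For a linear model that decomposition is trivial, which makes it a convenient toy illustration; the model, weights and features below are invented, and real systems would reach for attribution methods such as SHAP or LIME:

```python
# Toy explainability: for a linear model, each feature's contribution to
# the score is simply weight * value. Weights and features are invented.
weights = {"income": 0.00004, "debt_ratio": -2.0, "years_employed": 0.15}
bias = -1.0

def predict_with_explanation(features: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return a raw score and each feature's signed contribution to it."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

score, contributions = predict_with_explanation(
    {"income": 42000.0, "debt_ratio": 0.35, "years_employed": 6.0})
print(f"score = {score:.2f}")
# List features from largest to smallest absolute contribution.
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>15}: {c:+.2f}")
```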
Microsoft Responsible bots: 10 guidelines for developers of conversational AI (November 2018)
- Articulate the purpose of your bot and take special care if your bot will support consequential use cases.
- Be transparent about the fact that you use bots as part of your product or service.
- Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot’s competence (see the sketch after these guidelines).
- Design your bot so that it respects relevant cultural norms and guards against misuse.
- Ensure your bot is reliable.
- Ensure your bot treats people fairly.
- Ensure your bot respects user privacy.
- Ensure your bot handles data securely.
- Ensure your bot is accessible.
- Accept responsibility.
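The hand-off guideline (number 3 above) is straightforward to express in code. In this sketch the intent classifier is a hypothetical stand-in rather than any Microsoft API: the bot answers only when its confidence clears a threshold, and otherwise routes the conversation to a person:

```python
# A minimal hand-off sketch: below a confidence threshold, the bot stops
# guessing and routes to a human. The classifier is a stand-in.
HANDOFF_THRESHOLD = 0.6

def classify_intent(utterance: str) -> tuple[str, float]:
    """Stand-in intent classifier returning (intent, confidence)."""
    known = {"opening hours": ("faq_hours", 0.95),
             "reset my password": ("account_reset", 0.85)}
    for phrase, result in known.items():
        if phrase in utterance.lower():
            return result
    return ("unknown", 0.2)

def respond(utterance: str) -> str:
    intent, confidence = classify_intent(utterance)
    if confidence < HANDOFF_THRESHOLD:
        # Exceeds the bot's competence: hand off rather than improvise.
        return "I'm not sure I can help with that - connecting you to a person."
    return f"(bot answers intent '{intent}')"

print(respond("What are your opening hours?"))
print(respond("My medication dosage seems wrong"))
```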
CEPEJ European Ethical Charter on the use of artificial intelligence (AI) in judicial systems and their environment, 5 principles (February 2019)
- Principle of respect for fundamental rights: ensuring that the design and implementation of artificial intelligence tools and services are compatible with fundamental rights.
- Principle of non-discrimination: specifically preventing the development or intensification of any discrimination between individuals or groups of individuals.
- Principle of quality and security: with regard to the processing of judicial decisions and data, using certified sources and intangible data with models conceived in a multi-disciplinary manner, in a secure technological environment.
- Principle of transparency, impartiality and fairness: making data processing methods accessible and understandable, authorising external audits.
- Principle “under user control”: precluding a prescriptive approach and ensuring that users are informed actors and in control of their choices.
Women Leading in AI (WLinAI) 10 recommendations (February 2019)
- Introduce a regulatory approach governing the deployment of AI which mirrors that used for the pharmaceutical sector.
- Establish an AI regulatory function working alongside the Information Commissioner’s Office and Centre for Data Ethics – to audit algorithms, investigate complaints by individuals, issue notices and fines for breaches of GDPR and equality and human rights law, give wider guidance, spread best practice and ensure that algorithms are fully explained to users and open to public scrutiny.
- Introduce a new Certificate of Fairness for AI systems alongside a ‘kite mark’ type scheme to display it. Criteria to be defined at industry level, similarly to food labelling regulations.
- Introduce mandatory AIAs (Algorithm Impact Assessments) for organisations employing AI systems that have a significant effect on individuals.
- Introduce a mandatory requirement for public sector organisations using AI for particular purposes to inform citizens that decisions are made by machines, explain how the decision is reached and what would need to change for individuals to get a different outcome.
- Introduce a ‘reduced liability’ incentive for companies that have obtained a Certificate of Fairness to foster innovation and competitiveness.
- To compel companies and other organisations to bring their workforce with them – by publishing the impact of AI on their workforce and offering retraining programmes for employees whose jobs are being automated.
- Where no redeployment is possible, to compel companies to make a contribution towards a digital skills fund for those employees.
- To carry out a skills audit to identify the wide range of skills required to embrace the AI revolution.
- To establish an education and training programme to meet the needs identified by the skills audit, including content on data ethics and social responsibility. As part of that, we recommend the set up of a solid, courageous and rigorous programme to encourage young women and other underrepresented groups into technology.
The NHS’s 10 Principles for AI + Data (February 2019)
- Understand users, their needs and the context
- Define the outcome and how the technology will contribute to it
- Use data that is in line with appropriate guidelines for the purpose for which it is being used
- Be fair, transparent and accountable about what data is being used
- Make use of open standards
- Be transparent about the limitations of the data used and algorithms deployed
- Show what type of algorithm is being developed or deployed, the ethical examination of how the data is used, how its performance will be validated and how it will be integrated into health and care provision
- Generate evidence of effectiveness for the intended use and value for money
- Make security integral to the design
- Define the commercial strategy
IEEE General Principles of Ethical Autonomous and Intelligent Systems (A/IS) (March 2019)
- Human Rights: A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.
- Well-being: A/IS creators shall adopt increased human well-being as a primary success criterion for development.
- Data Agency: A/IS creators shall empower individuals with the ability to access and securely share their data to maintain people’s capacity to have control over their identity.
- Effectiveness: A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
- Transparency: The basis of a particular A/IS decision should always be discoverable.
- Accountability: A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.
- Awareness of Misuse: A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
- Competence: A/IS creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation.
Ethical issues arising from the police use of live facial recognition technology (March 2019)
As reported here, the UK government’s independent Biometrics and Forensics Ethics Group (BFEG) published an interim report outlining nine ethical principles forming a framework to guide policy on police facial recognition systems.
The nine ethical principles relate to: public interest; effectiveness; the avoidance of bias and algorithmic injustice; impartiality and deployment; necessity; proportionality; impartiality, accountability, oversight and the construction of watchlists; public trust; and cost effectiveness.
Floridi and Clement-Jones’ five principles key to any ethical framework for AI (March 2019)
- AI must be beneficial to humanity.
- AI must also not infringe on privacy or undermine security.
- AI must protect and enhance our autonomy and ability to take decisions and choose between alternatives.
- AI must promote prosperity and solidarity, in a fight against inequality, discrimination, and unfairness.
- We cannot achieve all this unless we have AI systems that are understandable in terms of how they work (transparency) and explainable in terms of how and why they reach the conclusions they do (accountability).
The European Commission’s High Level Expert Group on AI Ethics Guidelines for Trustworthy AI (April 2019)
- Human agency and oversight: AI systems should support human autonomy and decision-making, as prescribed by the principle of respect for human autonomy.
- Technical robustness and safety: A crucial component of achieving Trustworthy AI is technical robustness, which is closely linked to the principle of prevention of harm.
- Privacy and data governance: Closely linked to the principle of prevention of harm is privacy, a fundamental right particularly affected by AI systems.
- Transparency: This requirement is closely linked with the principle of explicability and encompasses transparency of elements relevant to an AI system: the data, the system and the business models.
- Diversity, non-discrimination and fairness: In order to achieve Trustworthy AI, we must enable inclusion and diversity throughout the entire AI system’s life cycle.
- Societal and environmental well-being: In line with the principles of fairness and prevention of harm, the broader society, other sentient beings and the environment should also be considered as stakeholders throughout the AI system’s life cycle.
- Accountability: The requirement of accountability complements the above requirements, and is closely linked to the principle of fairness.
Published on 8 April 2019, the EU HLEG AI ethics guidelines for trustworthy AI are detailed in full here.
Draft core principles of Australia’s Ethics Framework for AI (April 2019)
- Generates net-benefits. The AI system must generate benefits for people that are greater than the costs.
- Do no harm. Civilian AI systems must not be designed to harm or deceive people and should be implemented in ways that minimise any negative outcomes.
- Regulatory and legal compliance. The AI system must comply with all relevant international, Australian local, state/territory and federal government obligations, regulations and laws.
- Privacy protection. Any system, including AI systems, must ensure people’s private data is protected and kept confidential, and must prevent data breaches which could cause reputational, psychological, financial, professional or other types of harm.
- Fairness. The development or use of the AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensure the “training data” is free from bias or characteristics which may cause the algorithm to behave unfairly.
- Transparency & Explainability. People must be informed when an algorithm is being used that impacts them and they should be provided with information about what information the algorithm uses to make decisions.
- Contestability. When an algorithm impacts a person there must be an efficient process to allow that person to challenge the use or output of the algorithm.
- Accountability. People and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm, even if the impacts are unintended.
References
[1] Asimov, Isaac (1950): Runaround, in I, Robot (The Isaac Asimov Collection ed.), Doubleday. ISBN 0-385-42304-7.
[2] Murphy, Robin; Woods, David D. (2009): Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent Systems, 24 (4): 14–20.
[3] Boden, Margaret, et al. (2017): Principles of Robotics: Regulating Robots in the Real World. Connection Science, 29 (2): 124–129.
[4] Prescott, Tony; Szollosy, Michael (eds.) (2017): Ethical Principles of Robotics. Connection Science, 29 (2) and 29 (3).
Comments
Fantastic. I will spread to my colleagues in Cyberpsychology.
Thank you Simon.
I’ve been researching these codes of ethics for my PhD for over a year, and have never seen them all in one place. There were some I didn’t even know of, to top it off. So helpful, thanks.
Glad to help. Good luck with your PhD!
Likewise. This is a great resource. Thank you for publishing.
It is fantastic to see all of these ethical principles provided in one place. Thank you!
Very useful list. Perhaps add Keith Miller (2010) Principles Governing Moral Responsibility for Computing Artifacts (aka “The Rules”) https://ieeexplore.ieee.org/iel5/6294/5778994/05779006.pdf ?
April 2019 Beijing AI Principles from the Beijing Academy of Artificial Intelligence: https://www.baai.ac.cn/blog/beijing-ai-principles
Great and good work from Alan. Putting these together in one piece is not an easy task. Thanks for providing what I need for my LL.M.
As a student I learnt from you that artificial intelligence has a great impact on every job. Thanks for portraying these principles.
Thanks very much for this excellent resource! Consider adding McBride and Hoffman (2016) Bridging the Ethical Gap: From Human Principles to Robot Instructions. https://ieeexplore.ieee.org/document/7579396/
I want to learn AI, but every time I try to start I get confused about where to begin. I tried to learn how biometric systems work, and I did; now it’s time to learn AI.
Hello David. I recommend the wonderful book AI: A Very Short Introduction by Margaret Boden, see https://global.oup.com/academic/product/artificial-intelligence-a-very-short-introduction-9780199602919