Welcome to EURACTIV’s Tech Brief, your weekly update on all things digital in the EU. You can subscribe to the newsletter here.
“No regulation is regulation already. Until binding and effective legal instruments are adopted and applied, unacceptable patterns and practices will entrench themselves further.”
– Gregor Strojin, vice-chair of the Council of Europe’s Committee on AI
Story of the week: The European Commission will negotiate the AI Convention on fundamental rights, the rule of law and democracy within the Council of Europe on behalf of the EU. It can do so since it has already tabled a proposal in this area, the AI Act. While waiting for a mandate, it has obtained a postponement of the next plenary session from November to January. And on the grounds of the loyal cooperation principle, it has obliged the other EU countries to go into silent mode. Still, it is unclear if the Commission will be able to engage in meaningful negotiations since the AI regulation is still a moving target, and there are already talks of extending the timeline of the committee in charge of drafting the AI treaty.
As EU dynamics hijacked the decision-making process of an independent body, the operation is not without complexity. So far, the AI Act and the treaty have been negotiated by different national ministries, the former focusing on economics and innovation and the latter focusing on justice and fundamental rights. Since the treaty has now become subordinated to the EU’s rulebook, the justice ministries have lost control over the process. Questions are being raised about whether the AI Act provides enough fundamental rights safeguards. Moreover, countries with observer status, like the United States, have taken over the discussion and are now pushing for excluding the private sector from the treaty’s scope. Read more.
Don’t miss: That the EU’s position on Artificial Intelligence is yet to be defined is illustrated by the Parliament and Council moving in opposite directions regarding the use of biometric recognition systems. After weeks of technical discussions that closed the first two batches of compromise amendments and part of the third, the MEPs finally addressed some of the most sensitive topics on Wednesday. The co-rapporteurs have proposed removing all the exceptions to the prohibition on biometric recognition technologies, with wording that extends the ban also to private and online spaces.
The European People’s Party vehemently opposes a complete ban, but according to two parliamentary officials, most political groups are against them. The other controversial topic discussed was the regulation’s scope, but here the discussion was less tense as every group was set to lose (and gain) something from the proposed text. The exemptions for third countries’ public authorities and R&D activities will likely be the part that sees the most refinement in the future. The leading MEPs also proposed extending the EU database for providers of high-risk systems to public bodies, a middle ground between those who want it extended to all users and those who prefer the original text. The next political discussions will likely focus on the AI definition, General Purpose AI and Annex III. The technical discussions will continue in parallel. Read more.
Also this week:
- The Italian competition authority lost a major court case against Apple and Amazon.
- Following the Nord Stream sabotage, Brussels is paying increasing attention to the security of submarine cables.
- The UK is getting serious about diverging from the GDPR.
- Six hosting sites for quantum computers in Europe have been announced.
- Rightsholders are pressuring the Commission to crack down on online piracy.
- The EU executive outlined a series of results it wants to deliver at the next TTC meeting.
Before we start: The Netherlands is a big proponent of using open-source solutions across the board. We have caught up with the Dutch Minister for Digitalisation Alexandra van Huffelen to discuss her views on electronic digital identity, the AI Act and Data Act.
A message by Salesforce
Strategic focus and external factors such as the Covid crisis have accelerated the digital transformation of the European Commission (EC).
To learn more about the EC digital transformation journey and the important role of platforms read this report:
Council’s steady progress. Meanwhile, the Czech EU Council presidency continued progressing on the AI Act, virtually closing articles 30 to 85 at the Working Party meeting on Thursday. At the same time, law enforcement was confirmed as a critical hurdle for the file, with Germany coming back with its original idea of putting this part in a separate chapter or even under another legislative proposal. Meanwhile, the presidency’s approach to General Purpose AI was mostly welcomed, with only isolated voices calling for excluding these systems from the scope. On technical specifications, the EU countries want to pursue the same direction being followed in the context of the regulation on machinery products, whereby new harmonised standards would repeal them. The Commission opposes that, contending it would make the process too complicated. The member states had until Wednesday to provide written comments. A new compromise text is expected by the end of the month.
Metaverse? In the recital. Last week, EURACTIV reported that the European Parliament’s AI Act co-rapporteurs proposed an article specific to the metaverse. The initiative came from Dragoş Tudorache, who presented similar provisions at the amendment stage. However, it did not find favour with the majority of the political groups in the technical meeting last Friday, which argued that there was no impact assessment of how such measures would affect an emerging market and that the Commission is already planning an initiative in this area. Nevertheless, Tudorache brought a recital home, with the wording still to be defined.
US Bill of Rights. On Tuesday, the US government presented its blueprint for an AI Bill of Rights, introducing AI accountability based on five protected areas. These include safeguards against unsafe or ineffective systems, discriminatory algorithms and abusive data practices, as well as transparency and human alternatives and remedies. The White House’s AI blueprint is likely to impact the federal government most, but it is not binding for the private sector, leaving many in the industry unimpressed. To what extent AI accountability is a priority for the Biden administration is also called into question, since binding legislation is unlikely to pass through Congress before the mid-term elections.
More defeats in court. Fines of over €200 million, levied last year on Apple and Amazon by Italy’s antitrust regulator, were dismissed this week for procedural reasons, including the two companies having been given insufficient time to defend themselves. The penalties were handed out to the tech giants for what the Italian watchdog said was a collusive market agreement between the two concerning the sale of Apple’s Beats headphones on Amazon, and had already been reduced earlier this year due to a miscalculation. Read more.
Fine marked down. According to Reuters, a French court has significantly lowered a record fine levied against Apple in 2020 for what competition authorities said was the tech giant’s anti-competitive behaviour towards its distribution and retail network. The original €1.1 billion fine was reduced to €372 million on Thursday after an appeals court threw out one of the watchdog’s charges. Read more.
A troubled marriage. EU competition regulators will decide in one month whether to give the green light to Microsoft’s acquisition of gaming company Activision Blizzard. However, Reuters reports that the outcome is unlikely to go in the tech giant’s favour and could instead lead to the opening of a larger investigation into the deal. As part of the inquiry, EU regulators have been quizzing game developers on whether Microsoft would likely end up blocking rival developers’ access to Activision’s flagship product, “Call of Duty”, should the deal go ahead. 8 November has been cited as the decision deadline, but Brussels’ probe is just one of several ongoing inquiries into the merger.
Perceived threat. The suspected Russian sabotage of the Nord Stream gas pipelines has put the potential vulnerabilities of Europe’s critical infrastructure under the spotlight in Brussels. In response, the European Commission pledged on Wednesday to increase the protection of submarine cables, which are considered a vulnerable part of the internet infrastructure. Transcontinental submarine data cables account for 99% of the world’s digital communications, which are crucial to the functioning of the global economy. Potential attacks might result in communication outages or the interception of confidential data. The European Parliament will also present ideas on how to increase the resilience of this critical infrastructure, NIS2 rapporteur Bart Groothuis said during the plenary debate this week. More here.
Fingerprint security failure. A security flaw in the new Google Pixel 6a’s fingerprint sensor, which allows unregistered prints to unlock the phone, needs to be swiftly addressed, Euroconsumers said this week. Calling on the company to act and address the issue, the consumer group warned of the potentially serious impact on customers’ data protection, particularly given that the print sensor is also used to facilitate digital payments.
Data & Privacy
Privacy Shield 2.0. Privacy wonks will finally be able to read the much-awaited executive order that the Biden administration is set to release at 16:00 CET today on the new EU-US data protection agreement. The Commission’s adequacy decision is expected to be ready in November, but it will have to go through an EDPB opinion and the member states’ approval. Rather than the end of the process, this is just the beginning, not to mention the all-but-certain legal review that will take place in a Schrems III case.
Divergence is coming. The UK intends to replace the GDPR, breaking even further from the EU’s data protection regime than previously indicated, said Michelle Donelan, the new state secretary for digital, during the Tories’ annual conference on Monday. The country published a Data Reform Bill earlier this year, already casting doubt on the future of Brussels’ post-Brexit adequacy decision. This week, however, the new British government said it would undertake an even more drastic reform of the existing rules and develop a “truly bespoke” data protection system. Donelan also committed to amending the controversial Online Safety Bill and speeding up the rollout of 5G networks in the country. Read more.
Data breaches investigated. The Irish Data Protection Commission (DPC) has submitted a draft decision on its investigation into last year’s major data leak suffered by Meta. The case concerns Meta’s handling of the online exposure of data belonging to 533 million users, including many high-level EU officials, in April 2021. The DPC launched an investigation after concluding that the platform might have violated the GDPR by not notifying the watchdog of the situation. The decision has now been passed on to other EU authorities for their feedback. Read more.
Questionable basis. The Commission relied on data about the accuracy and precision of AI tools to detect child sexual abuse material online exclusively from Meta and one other tech company, an access to documents request filed by former MEP Felix Reda showed. Independent research, control tests or further details on the underlying datasets of the tech companies were not considered in the impact assessment for the proposal of the CSAM regulation. According to Reda, the Commission should not rely merely on industry data, where it is unclear what methodology was used for the tests that found a 99% precision rate for child abuse detection tools. The number of false positives is relevant because intimate messages and photos of innocent people could end up on the screens of investigators. More here.
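To see why the false-positive question matters at scale, here is a quick back-of-the-envelope sketch in Python. The scanning volume and flag rate below are purely hypothetical illustrations, not figures from the impact assessment; only the 99% precision claim comes from the story above.

```python
def expected_false_positives(messages_scanned, flag_rate, precision):
    """Messages wrongly flagged = flagged volume * (1 - precision)."""
    flagged = messages_scanned * flag_rate
    return flagged * (1 - precision)

# Hypothetical numbers: 10 billion messages scanned, 0.01% of them
# flagged by the detector, and the industry-claimed 99% precision.
fp = expected_false_positives(10_000_000_000, 0.0001, 0.99)
print(f"{fp:,.0f} innocent messages flagged")  # prints: 10,000 innocent messages flagged
```

Even under a 99% precision claim, the absolute number of innocent messages reaching investigators grows linearly with the volume scanned, which is why Reda argues the underlying test methodology matters.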
Digital Markets Act
Move over for the enforcers. Rita Wezenbeek is set to take over Gerard de Graaf’s vacant post as the director for platform regulation in DG CNECT, and Filomena Chirico from Breton’s cabinet has been tasked with leading the DMA enforcement unit, Politico reported. Tellingly, both worked for DG COMP in the past, a clear advantage for such key enforcement roles. While Wezenbeek gained a shiny new position in DG CNECT’s most prominent directorate, her move might benefit Breton’s fair share initiative, on which she has kept a cautious approach so far. Many wonder who will take her post as the new director for connectivity. Kamila Kloc is said to be the top choice for the replacement, while other names circulating are Pearse O’Donohue and Carlota Reiners Fontana. The new appointments also fit into the broader chessboard of DG CNECT’s leadership. At this point of the mandate, Commissioners start making strategic placements to continue their legacy after the elections. In this regard, Anthony Whelan is said to be ready to return from the von der Leyen cabinet to a management role, while the eternal Roberto Viola might have to leave his post sooner or later.
DMA/DSA working group. The informal intergroup of the European Parliament meant to monitor the implementation of the DMA and DSA has been divided into two sub-groups headed by Andreas Schwab and Christel Schaldemose, respectively. The new body is still at an early stage and lacks a work programme, with the first meeting unlikely to occur earlier than December. The lawmakers are well aware that the Commission would have little to share before the implementing and delegated acts are published. The model being followed is that of the ECON competition working group, which is supported by the committee’s secretariat and has its own webpage. Contrary to the competition working group, though, the intention here is not to regularly engage with the industry but to provide a platform to scrutinise the Commission’s work.
Digital Services Act
DSA adoption. The EU Council formally adopted the Digital Services Act on Tuesday. It is set to be signed by the presidents of the EU Council and Parliament on 19 October, after which it will be published in the Official Journal of the European Union. The rules are expected to become applicable in early 2024.
The untrusted enforcer. The DSA governance architecture results from Brussels’ “distrust” of Ireland as a regulator of Big Tech, EU digital chief Margrethe Vestager said in an interview this week. Member states remain at odds over whether Dublin is doing a good enough job at overseeing the major firms in its jurisdiction. The landmark legislation, approved earlier this year, was born out of a lack of faith from other countries that Ireland would be an adequate enforcer of EU law. Vestager’s words are nothing new, but it is the first time a senior EU official has spelled them out publicly.
Declaration in sight. The text is expected to be finalised next week following six technical meetings on the European Declaration on Digital Rights and Principles for the Digital Decade. MEPs have been particularly keen on including strong wording regarding protecting privacy and workers’ rights in the context of using Artificial Intelligence. The final text is expected to be shared with the member states by the end of October, with a COREPER discussion tentatively scheduled for 14 November. The final signature is expected in December.
Follow the path. The Czech approach to the rebuttable presumption in the platform worker directive, reported by EURACTIV three weeks ago, was met positively by many member states. The job is far from done, though, as several points have emerged that need to be further clarified, particularly on how the criteria-based presumption would be activated in practice. The question of how the directive would apply to the self-employed has also been raised, prompting some member states to request a legal opinion from the Council’s legal service. Extending the proposal’s scope to intermediaries such as recruitment agencies also found support among the EU countries, but some asked the Presidency to elaborate further on its practical implications.
STR pushed further. The EU proposal on short-term rentals is said to have been postponed from 28 October to mid-November, adding to the original delay.
Collaborative efforts. The European High-Performance Computing Joint Undertaking announced on Tuesday (4 October) its selection of the six hosting sites for the “first European quantum computers” in the Czech Republic, Germany, Spain, France, Italy and Poland. Academic researchers and industry across Europe will be able to access these computers, which are intended to solve complex problems relating to health, climate change or logistics using a fraction of the resources needed by traditional computers. The plan is to make these new quantum computers available from late 2023, and the envisaged investment amounts to over €100 million. More here.
Silicon wafers made in Sicily. The Commission this week approved €292.5 million in funding for Italy to support STMicroelectronics’ construction of a semiconductor plant in Catania, Sicily. The fab will fall under the Chips Act’s first-of-a-kind category, as it is the first in Europe to produce silicon carbide wafers, which are used as a base for certain microchips in products such as electric vehicles and renewable-energy batteries. The money will be made available via the Recovery and Resilience Facility. Along with the recently agreed project for Intel’s packaging factory in Vigasio, Veneto, investments in semiconductors are among the last moves of the outgoing Draghi government.
Start-ups’ grandeur. France, home to a growing start-up scene, has its eyes on becoming a European leader in start-up development, the head of France Digitale told EURACTIV. Paris shares Brussels’ goal of seeing the emergence of large European companies, Maya Noël said, noting that French cities are increasingly relying on the French Tech movement in developing innovation clusters across the country. Read more.
Protocol needed. The European Parliament’s committee investigating Pegasus and other spyware (PEGA) held a hearing on Thursday where MEPs who have been targeted spoke about their cases. Diana Riba, the vice-chair of the committee and a victim herself, called for establishing a protocol in the European Parliament for when MEPs are affected by spyware. Eva Kaili, vice-president of the European Parliament, suggested that the parliamentary phone-checking service, which revealed the surveillance of MEPs, be extended to journalists to deter further espionage. The next steps of the PEGA committee are a mission to Greece in early November and one to Hungary in February. A mission to Spain is still not on the agenda.
Rightsholders’ anti-piracy charge. 108 media, sports, music and culture organisations wrote to the Commission this week, calling for a legislative instrument to tackle piracy of live content, which the signatories argue is having a major impact on their sectors. Piracy, the groups say, has spiked in recent years, climbing to particular heights during the COVID-19 pandemic, and the EU response should take the form of regulation rather than a non-legislative instrument, which they argue would be “inadequate and insufficient”. Read more.
Public funding criteria. Public funding of the media in Austria will be updated to include criteria including gender equality and the following of editorial guidelines, the country’s government announced this week. The new €20 million funding package is being introduced in response to long-standing criticism of Austrian media funding allocation for its lack of transparency and privileging of larger market actors, and with the overall aim of addressing these issues and boosting diversity in the sector. Read more.
Mission to Greece. Media watchdog Reporters Without Borders (RSF) will visit Greece next week to examine the country’s worsening press freedom situation. Greece was the worst-ranked EU country in RSF’s 2022 World Press Freedom Index and has drawn attention in recent months due to a spyware scandal which revealed that the country’s secret services had bugged the phones of several journalists. RSF’s visit, from 9-11 October, will focus on the freedom, independence and sustainability of the press, the head of the organisation’s EU/Balkans desk told EURACTIV. Read more.
Elon’s latest U-turn. Elon Musk is heading back towards the deal he previously walked away from, offering this week to buy Twitter at his original price of $44 billion. A US judge has halted the two parties’ brewing lawsuit to allow the billionaire to complete the deal originally struck in April. Musk tried to renege on the agreement shortly after making it, prompting the platform to file a suit against him. As of this week, however, the deal appears to be back on. Musk has said the acquisition will act as “an accelerant to creating X, the everything app”, which some observers believe could be modelled on the “super-apps” prominent in China, which offer users multiple services in one place, from food deliveries to communication.
Meta’s Middle-Eastern policy. This week, the Facebook Oversight Board, which deals with notable content moderation questions, launched a case linked to the ongoing protests in Iran. The Board is seeking public feedback on how its Violence and Incitement policy should balance the newsworthiness of posts against their potential rhetorical messages and how criticism of the Iranian government and content related to the current protests should be handled. The case stems from the removal, in July this year, of a cartoon posted in a public group caricaturing Iran’s leader, Ayatollah Khamenei, and calling for the death of the government. The decision was appealed and eventually reversed on the grounds of newsworthiness.
Tangled cables soon in the past. The European Parliament on Tuesday adopted the legislation that will introduce a common charger for all mobile phones and other electronic devices on the European market as of late 2024. The law will force all companies, particularly Apple, which uses its own Lightning connector to charge iPhones, to adapt. This will cut tonnes of electronic waste each year and enable consumers to use their electrical appliances more easily and sustainably. The idea of a universal charger is anything but new, and even though the number of mobile phone charger types has been reduced from 32 to three in the last decade, a voluntary solution from the industry proved unachievable.
A Christmas present? The public consultation on the European Commission’s senders-pay initiative (aka fair share or traffic tax) is set to open before Christmas, several sources told EURACTIV, contradicting Breton’s recent interview with Le Monde, which anticipated it for early next year. One possible date circulating is 21 December, which would continue the Commission’s trend of making other people work during the holidays – while preaching about work-life balance. The timing would fit the intention to publish the initiative in the first quarter of 2023.
Missing the 5G train. The EU will miss its digital decade goals unless it alters the current policy framework, according to research by GSMA. While 5G is being adopted in Europe faster than ever, the region is still lagging behind the US and Asia, and nearly one-third of the population will remain without coverage by 2025. As such, GSMA calls for Brussels to increase its focus on fostering the market conditions that will spur investment in infrastructure and reiterated its call for the fair contribution principle regarding network costs.
Looking for victories. The Trade and Technology Council is in dire need of showing some concrete results ahead of the next summit in December. According to a presentation that the Commission gave to the member states on Monday, the two blocs are trying to complete a roadmap on trustworthy AI, a joint exercise on supply chain disruptions and the identification of a joint infrastructural project to be showcased at the next ministerial meeting. Read more.
Future of the Internet event. The EU and US are organising a high-level multi-stakeholder event in Prague on 2 November to follow up on the Declaration on the Future of the Internet signed in April. According to an early programme seen by EURACTIV, the conference will also discuss how the declaration could support the building of public sector metaverses for the common good, with the Commission set to showcase its Destination Earth initiative. Disinformation on the war in Ukraine will be another major focus of the event.
What else we’re reading this week:
How the European Union can best apply the Digital Markets Act (Bruegel)
Secretive Chip Startup May Help Huawei Circumvent US Sanctions (Bloomberg)
And while the European Commission is restricting the use of facial recognition in public places for companies, it's left wide exemptions for law enforcements to deploy the tech in cases including a search for missing children, preventing terrorist attacks or locating armed and dangerous criminals.
The European Union's (EU) proposed plan to regulate the use of artificial intelligence (AI) threatens to undermine the bloc's social safety net, and is ill-equipped to protect people from surveillance and discrimination, according to a report by Human Rights Watch.
The European Commission's proposed Artificial Intelligence (AI) Act attempts to regulate a wide range of AI applications, aligning them with EU values and fundamental rights through a risk-based approach.
AI should be governed under the same rules as humans. Manufacturers should agree to abide by general ethical guidelines mandated by international regulation. There should be understanding of how AI logic and decisions are made.
It can only be processed if at least one of two conditions apply. The first is that the data subject has given their active, free and informed consent. The second is that the data processor can indisputably prove that the use of FRT serves a legitimate aim in a proportionate way.
Facial Recognition (FR) technology can be used in a number of ways by the Met, including to prevent and detect crime, find wanted criminals, safeguard vulnerable people, and to protect people from harm – all to keep the people we serve safe.
Artificial Intelligence Act.
|European Union regulation|
|Text with EEA relevance|
|Journal reference||COM/2021/206 final|
Why is AI regulation necessary? We need to regulate AI for two reasons. First, because governments and companies use AI to take decisions that can have a significant impact on our lives. For example, algorithms that calculate school performance can have a devastating effect.
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms.
AI systems which have an adverse impact on people's safety or their fundamental rights are considered high-risk. This includes where: the AI system is intended to be used as a safety component of a product or is itself a product covered by legislation listed in Annex II (broadly product safety legislation)
The AI Act is a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere. The law assigns applications of AI to three risk categories.
Bias in AI systems is often seen as a technical problem, but the NIST report acknowledges that a great deal of AI bias stems from human biases and systemic, institutional biases as well.
Ultimately, liability for negligence would lie with the person, persons or entities who caused the damage or defect or who might have foreseen the product being used in the way that it was used.
Government organizations, including the NGA, CIA, and FBI, are all actively using artificial intelligence to improve the data analysis process.
The only law currently in effect is the California Consumer Privacy Act (CCPA). It provides consumers certain rights related to their facial recognition data, such as the right to access, opt-out of the sale of, and delete their data.
Strict limits on private use of face recognition
While EFF does not support a ban on private use of face recognition, we do support strict limits. Specifically, laws should subject private parties to the following rules: Do not collect a faceprint from a person without their prior written opt-in consent.
That images of people's faces which allow or confirm the identification of a person are biometric data and therefore data controllers and processors require a lawful basis under both Article 6 and Article 9 of the GDPR to process that data.
- Mask Up, Be Safe.
- Dress to Unimpress. Make yourself less memorable to both humans and machines by wearing clothing as dark and pattern-free as your commitment to privacy. ...
- Delete the Deets. ...
- Stay Cool. ...
- Lose Your Car. ...
- Run Facial Interference. ...
- More Great WIRED Stories.
There are no federal laws governing the use of facial-recognition technology, which has led states, cities, and counties to regulate it on their own in various ways, particularly when it comes to how law enforcement agencies can use it.
It helps in detectingindividuals or groups that need close surveillance, usually for lawful cause.AI Facial Recognition Technology can identify criminals at the scene of an event. It can further help in recognizing those criminals who roam free. In another way, it can be a great factor to make the cities safer.
London: The European Union (EU) is planning to enforce the new Digital Markets Act (DMA) to tame the Big Tech companies in the spring next year.
The horizontal sector is concerned with environmental legislation on various matters which cut. across different environmental subject areas, as opposed to regulations which apply to a specific. sector, e.g. water or air. Rather than to regulate a specific area, these items of legislation are. more procedural.
The proposed European Health Data Space (EHDS) is not a mere regulation, it is a vision for the future of health and care for Europe. More effective use of health data is necessary to address diseases impacting often vulnerable communities.
AI drives down the time taken to perform a task. It enables multi-tasking and eases the workload for existing resources. AI enables the execution of hitherto complex tasks without significant cost outlays. AI operates 24x7 without interruption or breaks and has no downtime.
In conclusion, governments should both promote and control developments in AI and ML, this will enable them to reap maximum benefits while limiting or eliminating its flaws which could be disastrous to its citizens.
A Max-Planck Institute study suggests humans couldn't prevent an AI from making its own choices. The researchers used Alan Turing's "halting problem" to test their theory. Programming a superintelligent AI with "containment algorithms" or rules would be futile.
How many countries have AI regulations? At least 60 nations have adopted artificial intelligence laws and regulations since 2017, a flurry of action that almost matches the rate at which new AI is being implemented.
Whether and how to regulate AI has become a lively discussion all over the world. Only recently, some rather specific regulations have come into force in a number of countries including France, Germany, China and Canada.
So, what is Responsible AI? Responsible AI is the practice of designing, developing, and deploying AI with good intention to empower employees and businesses, and fairly impact customers and society—allowing companies to engender trust and scale AI with confidence.
The European Parliament takes decisions on EU laws together with the Council of the European Union. If the Parliament and the Council cannot agree on a piece of legislation, there will be no new law.
There are two types of bias in AI. One is algorithmic AI bias or “data bias,” where algorithms are trained using biased data. The other kind of bias in AI is societal AI bias. That's where our assumptions and norms as a society cause us to have blind spots or certain expectations in our thinking.
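Data bias of the first kind can be shown numerically: if one group is over-represented among positive labels in the training data, a model that simply reproduces historical frequencies inherits the skew. The data below are a toy, hypothetical example.

```python
from collections import Counter

# Toy, hypothetical training set of (group, label) pairs. Group "A"
# is over-represented among approved examples, so a frequency-based
# "model" learns to favour it.
training = [("A", "approved")] * 80 + [("A", "rejected")] * 20 \
         + [("B", "approved")] * 30 + [("B", "rejected")] * 70

def approval_rate(group):
    """Fraction of this group's training examples labelled 'approved'."""
    labels = [label for g, label in training if g == group]
    return Counter(labels)["approved"] / len(labels)

# The model reproduces the historical rates, and with them the bias:
print(approval_rate("A"))  # 0.8
print(approval_rate("B"))  # 0.3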
Regulatory frameworks are legal mechanisms that exist on national and international levels. They can be mandatory and coercive (national laws and regulations, contractual obligations) or voluntary (integrity pacts, codes of conduct, arms control agreements).
- Thinking Humanly (The Cognitive approach)
- Acting Humanly (The Turing Test approach)
- Thinking Rationally (The Laws of Thought approach)
- Acting Rationally (The Rational Agent approach)
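The last of the four approaches, acting rationally, models an agent that picks the action maximizing expected utility. A minimal sketch, with entirely hypothetical actions, probabilities and utilities:

```python
# Minimal rational-agent sketch: choose the action with the highest
# expected utility over possible outcomes (all numbers hypothetical).

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """Return the action whose expected utility is highest."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "take umbrella":  [(0.3, 8), (0.7, 6)],    # rain / no rain
    "leave umbrella": [(0.3, -5), (0.7, 10)],
}
print(choose(actions))  # "take umbrella" (6.6 vs 5.5 expected utility)
```

This is the framing used throughout the rational-agent literature: rationality is defined by the quality of outcomes, not by how human-like the reasoning is.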
The Maastricht Treaty (formally known as the Treaty on European Union), which was signed on February 7, 1992, created the European Union.
No matter what your AI project, it is critical that you adopt an ethics-first approach to AI development, not only to safeguard your company against undue risk and ensure regulatory compliance, but also to build AI technologies that actually deliver real, long-term value.
Machine learning bias, also sometimes called algorithm bias or AI bias, is a phenomenon that occurs when an algorithm produces results that are systemically prejudiced due to erroneous assumptions in the machine learning process.
Algorithms can enhance already existing biases. They can discriminate. They can threaten our security, manipulate us and have lethal consequences. For these reasons, people need to explore the ethical, social and legal aspects of AI systems.
Liability inquiries often start, and end, with the driver of the car that crashed or the physician who made a faulty treatment decision. Granted, if the end-user misuses an AI system or ignores its warnings, he or she should be liable.
AI designers and developers are responsible for considering AI design, development, decision processes, and outcomes. Human judgment plays a role throughout a seemingly objective system of logical decisions.
Even if (some) AIs can act or decide (i.e. have agency), they lack the capacities for moral agency, and so the responsibility for their actions or decisions—actions and decisions delegated to them by humans—remains and should remain with the human agents who develop and use the technology.
AI methods are being used to identify people who wish to remain anonymous; to infer and generate sensitive information about people from non-sensitive data; to profile people based upon population-scale data; and to make consequential decisions using this data, some of which profoundly affect people's lives.
By detecting suspicious activities, AI can prevent crimes, and help investigators identify suspects more rapidly, ensuring stronger public safety and increased community confidence in law enforcement and criminal justice in general. AI also has a significant use in courts of law.
The introduction of artificial intelligence into judicial systems can give judges resources that make their work easier, but it will never replace judges and their expertise.
Some techniques we use to do this include interviews, wiretaps, and data analysis. While the need for intelligence hasn't changed, the threats confronting the country have evolved—and we're constantly adapting to combat the threats we're facing at home and abroad.
Artificial intelligence is the capability of a computer system to mimic human cognitive functions such as learning and problem-solving. Through AI, a computer system uses maths and logic to simulate the reasoning that people use to learn from new information and make decisions.
Machine learning is a branch of artificial intelligence (AI) and computer science which focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy.
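"Gradually improving its accuracy" can be made concrete with a tiny learner. The sketch below, using toy data invented for illustration, fits a single weight by gradient descent; each pass over the data shrinks the error.

```python
# Tiny illustration of learning from data: fit y = w * x by gradient
# descent and watch the weight converge toward the true relation.
data = [(1, 2), (2, 4), (3, 6)]  # toy data; true relation is y = 2x

w = 0.0
for epoch in range(50):
    for x, y in data:
        error = w * x - y
        w -= 0.05 * error * x   # gradient step on the squared error

print(round(w, 3))  # converges toward 2.0
```

No rule "y equals two x" was ever programmed in; the relationship was extracted from examples, which is the essence of the machine learning definition above.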
From morning to night, going about our everyday routines, AI technology drives much of what we do. When we wake, many of us reach for our mobile phone or laptop to start our day. Doing so has become automatic, and integral to how we function in terms of our decision-making, planning and information-seeking.
Humans also need breaks and time off to balance their work and personal lives, but AI can work endlessly. AI systems compute much faster than humans, perform multiple tasks at a time with accurate results, and handle tedious, repetitive jobs with ease.
Voice assistants, image recognition for face unlock in cellphones, and ML-based financial fraud detection are examples of AI software currently used in everyday life. Typically, downloading such software from an app store is all that is required; no additional devices are needed.
Artificial intelligence's impact on society is widely debated. Many argue that AI improves the quality of everyday life by doing routine and even complicated tasks better than humans can, making life simpler, safer, and more efficient.
The benefits of AI
AI can automate complex processes and minimize downtime by predicting maintenance needs. It also improves accuracy and decision-making: AI augments human intelligence with rich analytics and pattern-prediction capabilities, improving the quality, effectiveness and creativity of employee decisions.
Their principles underscore fairness, transparency and explainability, human-centeredness, and privacy and security.
Machine learning is used in internet search engines, email filters to sort out spam, websites to make personalised recommendations, banking software to detect unusual transactions, and lots of apps on our phones such as voice recognition.
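The spam-filter example from that list can be sketched as a simple keyword scorer. The word list and threshold below are hypothetical; real filters learn their weights from labelled mail with probabilistic models such as naive Bayes rather than hand-picking them.

```python
# Toy keyword-based spam score (hypothetical word list and weights).
SPAM_WORDS = {"winner": 3, "free": 2, "prize": 3, "urgent": 1}

def spam_score(message):
    """Sum the weights of known spam words appearing in the message."""
    return sum(SPAM_WORDS.get(word, 0) for word in message.lower().split())

def is_spam(message, threshold=4):
    return spam_score(message) >= threshold

print(is_spam("You are a winner claim your free prize"))  # True
print(is_spam("Meeting moved to 3pm"))                    # False
```

A learned filter works on the same principle, except that the vocabulary and weights are estimated from thousands of labelled messages instead of being written by hand.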
Factors that make machine learning difficult include the in-depth knowledge of many aspects of mathematics and computer science it demands, and the meticulous attention to detail needed to identify inefficiencies in an algorithm and optimize it.
Since machine learning requires knowledge of computer programming, statistics and data evaluation, a machine learning career can also lead to leadership roles in automation or analytics environments that use data science, big-data analysis, AI integration and more.