EU's new AI rules ignite battle over data transparency


A new set of European Union rules on artificial intelligence will require companies that use the technology to explain what data was used to train their systems, giving the public access to information the industry has previously kept hidden.


Since OpenAI released ChatGPT to the public in November 2022, with backing from Microsoft (MSFT.O), there has been a surge of interest and activity in generative AI, a suite of applications that can quickly produce text, images and audio. But as the industry has thrived, critics have questioned how these companies obtain the data used to feed their models, and whether feeding them bestselling books and Hollywood movies without the creators' consent violates copyright law.


The EU's recently enacted AI Act will be rolled out in phases over the next two years, giving regulators time to implement the new laws while businesses adjust to fresh obligations. How some of these rules will work in practice, however, remains unclear. One of the more contentious provisions requires organisations providing general-purpose AI models, such as ChatGPT, to publish "detailed summaries" of the content used to train them. The newly created AI Office has said it plans to release a template for these summaries in early 2025, following a consultation.


Although the specifics are still being worked out, AI companies are deeply reluctant to reveal what data their models were trained on, describing the information as proprietary and arguing that disclosing it would hand an advantage to competitors.


Photoroom's CEO Matthieu Riouf said that, ideally, he would like "to have access to my competitors' datasets and vice versa."

He compared the situation to cooking: there is a secret ingredient in the recipe that the best chefs would never reveal to others, the "je ne sais quoi" that sets their work apart.
How detailed these transparency reports must be will have major consequences, both for small AI start-ups and for large tech companies such as Google (NASDAQ: GOOG) and Meta (NASDAQ: META), which have placed the technology at the core of their future business plans.

 SHARING TRADE SECRETS 



Over the past year, several high-profile tech companies, including Google, OpenAI and Stability AI, have faced lawsuits from creators who accuse them of misusing their content to train their models.

U.S. President Joe Biden has signed executive orders addressing the security risks of AI, but the underlying copyright questions remain largely untested. There have been bipartisan calls in Congress for technology companies to compensate rights holders for the data they use.

In response, leading tech companies have been striking content-licensing deals with TV channels, newspapers and websites. OpenAI has signed agreements with the Financial Times and The Atlantic, while Google has made deals with NewsCorp (NWSA.O) and the social media site Reddit.

OpenAI drew criticism in March when CTO Mira Murati declined to answer a question from the Wall Street Journal about whether YouTube videos had been used to train Sora, the company's video-generating tool, a practice that would violate YouTube's terms and conditions.

Last month, the company was criticised by Hollywood actress Scarlett Johansson after it used an AI voice described as "uncannily similar" to hers in a live demonstration of the latest version of ChatGPT.

Thomas Wolf, co-founder of the prominent AI startup Hugging Face, said he would like to see more transparency, but that view is not shared across the industry. "It is hard to predict how this will play out; there is a long way to go and a lot still to be decided," he said.

Senior politicians across the bloc remain divided on the issue.
Dragos Tudorache, one of the lawmakers who steered the AI Act through the European Parliament, argued that AI companies should be compelled to open up their datasets.
"The data has to be detailed enough that Scarlett Johansson, Beyonce or whoever it may be can know whether their work, their songs, their voice, their art or their science was used in training the algorithm," he said.

A Commission official said the AI Act recognises the need to strike an appropriate balance between the legitimate interest in protecting trade secrets on the one hand and, on the other, the legitimate interests of other parties, including the holders of copyright and other rights under Union law.

In private discussions, the French government has reportedly opposed rules that could slow the growth of European AI startups.

Speaking at the Viva Technology conference in Paris in May, French finance minister Bruno Le Maire said he wanted Europe not merely to buy AI technologies from America and China, but to become a leader in the field itself.

"It is about time that Europe, which has devised so many controls and standards, learned that you have to invent before you legislate," he said. "Otherwise, you end up regulating technologies that you do not fully understand, or regulating them badly because you do not fully understand them."