      Music Industry Threatens ‘Deepfake AI Music’ Service With Legal Action

      news.movim.eu / TorrentFreak · Wednesday, 20 March - 16:10 · 3 minutes

    Over the past year, new artificial intelligence tools and services have been surfacing everywhere.

    The same can be said for AI-related lawsuits and complaints, which have been piling up by the dozen.

    In the UK, music industry group BPI has entered the mix, targeting AI-generated voice models and cover tracks. This technology, which partly relies on copyrighted recordings, has been controversial for a while.

    Voicify Faces Legal Pressure

    AI vocal-cloning service Voicify was previously called out by the RIAA. In a recommendation to the U.S. Trade Representative, the record label group asked the USTR to put the site on its list of notorious piracy sites. The USTR did not include the site in its report, however, and Voicify continued its operations as usual.

    After the RIAA put a spotlight on Voicify, the BPI maintained the pressure in a letter to the site’s operators, urging them to stop all copyright-infringing activity. If not, the BPI would consider follow-up steps, implying a full-blown lawsuit.

    The letter was sent privately on February 26th but, aside from the legal threat, its contents remain unknown. While Voicify failed to respond appropriately according to the BPI, the service did announce a major change soon after, rebranding its website to “Jammable”.


    Rebrand Can’t Escape Lawsuit Threat

    According to the website, the brand change was motivated by the service’s move away from just being an ‘AI Voice Platform’. However, a source familiar with the situation informs TorrentFreak that a ‘legal matter’ also played a role in the decision. That could very well be related to BPI’s letter.

    Perhaps not coincidentally, the news about BPI’s legal threat against Voicify/Jammable broke in The Times, just days after the rebrand. As ‘a matter of policy’, BPI can’t say whether it reached out to The Times first, or vice versa, but the added pressure helps its case.

    An accompanying message released by BPI’s General Counsel Kiaron Whitehead is also crystal clear.

    “The music industry has long embraced new technology to innovate and grow, but Voicify (now known as Jammable), and a growing number of others like them, are misusing AI technology by taking other people’s creativity without permission and making fake content. In so doing, they are endangering the future success of British musicians and their music.”

    Massive Music ‘Deepfake’ Service

    With a library featuring thousands of voice models, the BPI considers Jammable one of the world’s largest and most egregious deepfake AI music sites. In its letter, the BPI gave the voice-cloning site the option to respond and avoid legal action, but thus far it remains dissatisfied.

    While AI-related copyright issues are still rather novel and mostly unexplored from a legal perspective, the music group is convinced it has the law on its side. The BPI’s complaint centers around Jammable’s purported use of copyrighted music recordings to create voice models and AI covers.

    In theory, these types of services could enable people to create a cover of a Frank Sinatra song using the voice of Homer Simpson, if they’d like to hear that.

    This use of copyrighted music, combined with the commercial nature of Jammable, is not allowed, according to the BPI.

    Thus far, music-related AI lawsuits haven’t appeared in UK courts, so if the BPI decides to follow up on its threat, this would be the first. For now, however, there is no sign of legal action.

    Several other music industry entities, including the Musicians’ Union and UK Music, support the efforts to protect rightsholders against the misuse of AI.

    “Jammable is just one worrying example of AI developers encroaching on the personal rights of music creators for their own financial gain,” Musicians’ Union General Secretary Naomi Pohl says.

    “It can’t be right that a commercial enterprise can just steal someone’s voice in order to generate unlimited soundalike tracks with no labelling to clarify to the public the output tracks are not genuine recordings by the original artist, no permission from the original artist and no share of the money paid to them either.”

    Speaking with TorrentFreak, a BPI spokesperson says the group has only sent a letter to Voicify/Jammable, not to any similar services. We also asked Jammable for a comment on the legal threat but, at the time of publication, we have yet to hear back.

    From: TF, for the latest news on copyright battles, piracy and more.

      1960s chatbot ELIZA beat OpenAI’s GPT-3.5 in a recent Turing test study

      news.movim.eu / ArsTechnica · Friday, 1 December - 21:27 · 1 minute

    An artist's impression of a human and a robot talking. (credit: Getty Images | Benj Edwards)

    In a preprint research paper titled "Does GPT-4 Pass the Turing Test?", two researchers from UC San Diego pitted OpenAI's GPT-4 AI language model against human participants, GPT-3.5, and ELIZA to see which could trick participants into thinking it was human with the greatest success. But along the way, the study, which has not been peer-reviewed, found that human participants correctly identified other humans in only 63 percent of the interactions—and that a 1960s computer program surpassed the AI model that powers the free version of ChatGPT.

    Even with limitations and caveats, which we'll cover below, the paper presents a thought-provoking comparison between AI model approaches and raises further questions about using the Turing test to evaluate AI model performance.

    British mathematician and computer scientist Alan Turing first conceived the Turing test as "The Imitation Game" in 1950. Since then, it has become a famous but controversial benchmark for determining a machine's ability to imitate human conversation. In modern versions of the test, a human judge typically talks to either another human or a chatbot without knowing which is which. If the judge cannot reliably tell the chatbot from the human a certain percentage of the time, the chatbot is said to have passed the test. The threshold for passing the test is subjective, so there has never been a broad consensus on what would constitute a passing success rate.

      ChatGPT is one year old. Here’s how it changed the world.

      news.movim.eu / ArsTechnica · Thursday, 30 November - 17:01 · 1 minute

    An artist's interpretation of what ChatGPT might look like if embodied in the form of a robot toy blowing out a birthday candle. (credit: Aurich Lawson | Getty Images)

    One year ago today, on November 30, 2022, OpenAI released ChatGPT. It's uncommon for a single tech product to create as much global impact as ChatGPT in just one year.

    Imagine a computer that can talk to you. Nothing new, right? Those have been around since the 1960s. But ChatGPT, the application that first brought large language models (LLMs) to a wide audience, felt different. It could compose poetry, seemingly understand the context of your questions and your conversation, and help you solve problems. Within a few months, it became the fastest-growing consumer application of all time. And it created a frenzy.

    During these 365 days, ChatGPT has broadened the public perception of AI, captured imaginations, attracted critics, and stoked existential angst. It emboldened and reoriented Microsoft, made Google dance, spurred fears of AGI taking over the world, captivated world leaders, prompted attempts at government regulation, helped add words to dictionaries, inspired conferences and copycats, led to a crisis for educators, hyper-charged automated defamation, embarrassed lawyers by hallucinating, prompted lawsuits over training data, and much more.

      Sam Altman officially back as OpenAI CEO: “We didn’t lose a single employee”

      news.movim.eu / ArsTechnica · Thursday, 30 November - 14:37 · 1 minute

    A glowing OpenAI logo on a light blue background. (credit: OpenAI / Benj Edwards)

    On Wednesday, OpenAI announced that Sam Altman has officially returned to the ChatGPT-maker as CEO—accompanied by Mira Murati as CTO and Greg Brockman as president—resuming their roles from before the shocking firing of Altman that threw the company into turmoil two weeks ago. Altman says the company did not lose a single employee or customer throughout the crisis.

    "I have never been more excited about the future. I am extremely grateful for everyone’s hard work in an unclear and unprecedented situation, and I believe our resilience and spirit set us apart in the industry," wrote Altman in an official OpenAI news release . "I feel so, so good about our probability of success for achieving our mission."

    In the statement, Altman formalized plans that have been underway since last week: ex-Salesforce co-CEO Bret Taylor and economist Larry Summers have officially begun their tenure on the "new initial" OpenAI board of directors. Quora CEO Adam D’Angelo is keeping his previous seat on the board. Also on Wednesday, previous board members Tasha McCauley and Helen Toner officially resigned. In addition, a representative from Microsoft (a key OpenAI investor) will have a non-voting observer role on the board of directors.

      Stable Diffusion XL Turbo can generate AI images as fast as you can type

      news.movim.eu / ArsTechnica · Wednesday, 29 November - 21:20

    Example images generated using Stable Diffusion XL Turbo. (credit: Stable Diffusion XL Turbo / Benj Edwards)

    On Tuesday, Stability AI launched Stable Diffusion XL Turbo, an AI image-synthesis model that can rapidly generate imagery based on a written prompt. So rapidly, in fact, that the company is billing it as "real-time" image generation, since it can also quickly transform images from a source such as a webcam.

    SDXL Turbo's primary innovation lies in its ability to produce image outputs in a single step, a significant reduction from the 20–50 steps required by its predecessor. Stability attributes this leap in efficiency to a technique it calls Adversarial Diffusion Distillation (ADD). ADD uses score distillation, where the model learns from existing image-synthesis models, and adversarial loss, which enhances the model's ability to differentiate between real and generated images, improving the realism of the output.
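    As a rough illustration of how those two loss terms might fit together, here is a minimal, self-contained PyTorch sketch of a single ADD-style training step. The tiny networks, the fixed noise scale, and the loss weighting below are placeholders invented for this example; they are not Stability's actual architecture or training code, which is specified in the company's research paper.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        # Toy stand-ins for the real networks. ADD prescribes how the losses are
        # combined, not these architectures; everything below is for illustration only.
        class TinyGenerator(nn.Module):          # the "student": a one-step generator
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                         nn.Conv2d(16, 3, 3, padding=1))

            def forward(self, z):
                return torch.tanh(self.net(z))

        class TinyTeacher(nn.Module):            # stand-in for the frozen pretrained diffusion model
            def __init__(self):
                super().__init__()
                self.net = nn.Conv2d(3, 3, 3, padding=1)

            def forward(self, x, t):
                return self.net(x)               # pretend this denoises x at timestep t

        class TinyDiscriminator(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(nn.Conv2d(3, 16, 4, stride=2, padding=1),
                                         nn.LeakyReLU(0.2), nn.Flatten(),
                                         nn.Linear(16 * 16 * 16, 1))

            def forward(self, x):
                return self.net(x)

        def add_losses(student, teacher, disc, real, lambda_distill=1.0):
            fake = student(torch.randn_like(real))        # single-step sample from noise

            # Adversarial term (hinge loss): the discriminator separates real images
            # from one-step generations, and the student is trained to fool it.
            loss_d = F.relu(1 - disc(real)).mean() + F.relu(1 + disc(fake.detach())).mean()
            loss_g_adv = -disc(fake).mean()

            # Score-distillation term: lightly noise the student's output, let the
            # frozen teacher reconstruct it, and pull the student toward that target.
            t = torch.randint(0, 1000, (real.shape[0],))
            noised = fake + 0.1 * torch.randn_like(fake)  # greatly simplified noising
            with torch.no_grad():
                target = teacher(noised, t)
            loss_distill = F.mse_loss(fake, target)

            return loss_g_adv + lambda_distill * loss_distill, loss_d

        # One illustrative step on random 32x32 "images":
        student, teacher, disc = TinyGenerator(), TinyTeacher(), TinyDiscriminator()
        loss_g, loss_d = add_losses(student, teacher, disc, torch.randn(2, 3, 32, 32))
        print(loss_g.item(), loss_d.item())

    The point the sketch tries to capture is that the one-step student is optimized against both signals at once: the discriminator rewards outputs that look like real images, while the frozen teacher transfers its learned denoising behavior through the distillation target.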

    Stability detailed the model's inner workings in a research paper released Tuesday that focuses on the ADD technique. One of the claimed advantages of SDXL Turbo is its similarity to Generative Adversarial Networks (GANs), especially in producing single-step image outputs.

      Google’s DeepMind finds 2.2M crystal structures in materials science win

      news.movim.eu / ArsTechnica · Wednesday, 29 November - 18:42

    The researchers identified novel materials by using machine learning to first generate candidate structures and then gauge their likely stability. (credit: Marilyn Sargent/Berkeley Lab)

    Google DeepMind researchers have discovered 2.2 million crystal structures that could open up progress in fields from renewable energy to advanced computation, and that show the power of artificial intelligence to discover novel materials.

    The trove of theoretically stable but experimentally unrealized combinations identified using an AI tool known as GNoME is more than 45 times larger than the number of such substances unearthed in the history of science, according to a paper published in Nature on Wednesday.

    The researchers plan to make 381,000 of the most promising structures available to fellow scientists to make and test their viability in fields from solar cells to superconductors. The venture underscores how harnessing AI can shortcut years of experimental graft—and potentially deliver improved products and processes.

      Amazon unleashes Q, an AI assistant for the workplace

      news.movim.eu / ArsTechnica · Wednesday, 29 November - 17:13

    The Amazon Q logo. (credit: Amazon)

    On Tuesday, Amazon unveiled Amazon Q, an AI chatbot similar to ChatGPT that is tailored for corporate environments. Developed by Amazon Web Services (AWS), Q is designed to assist employees with tasks like summarizing documents, managing internal support tickets, and providing policy guidance, differentiating itself from consumer-focused chatbots. It also serves as a programming assistant.

    According to The New York Times, the name "Q" is a play on the word "question" and a reference to the character Q in the James Bond novels, who makes helpful tools. (And there's apparently a little bit of Q from Star Trek: The Next Generation thrown in, although hopefully the new bot won't cause mischief on that scale.)

    Amazon Q's launch positions it against existing corporate AI tools like Microsoft's Copilot, Google's Duet AI, and ChatGPT Enterprise. Unlike some of its competitors, Amazon Q isn't built on a single large language model (LLM). Instead, it uses a platform called Bedrock, integrating multiple AI systems, including Amazon's Titan and models from Anthropic and Meta.
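    Bedrock itself is exposed to developers through the AWS SDK, which gives a sense of what "integrating multiple AI systems" looks like in practice. The snippet below is a hedged sketch using boto3's bedrock-runtime client to call one Titan text model; the model ID, prompt, and request fields are illustrative, and none of this describes how Amazon Q is implemented internally.

        import json
        import boto3  # AWS SDK for Python

        # Illustrative Bedrock call. Swapping the modelId (for example, to an
        # Anthropic model) changes the request/response schema but not the
        # overall calling pattern.
        client = boto3.client("bedrock-runtime", region_name="us-east-1")

        response = client.invoke_model(
            modelId="amazon.titan-text-express-v1",
            body=json.dumps({
                "inputText": "Summarize our travel expense policy in three bullet points.",
                "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
            }),
        )

        result = json.loads(response["body"].read())
        print(result["results"][0]["outputText"])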

      Mother plucker: Steel fingers guided by AI pluck weeds rapidly and autonomously

      news.movim.eu / ArsTechnica · Tuesday, 28 November - 23:09 · 1 minute

    The Ekobot autonomous weeding robot roving around an onion field in Sweden. (credit: Ekobot AB)

    Anybody who has pulled weeds in a garden knows that it's a tedious task. Scale it up to farm-sized jobs, and it becomes a nightmare. The most efficient industrial alternative, herbicides, have potentially devastating side effects for people, animals, and the environment. So a Swedish company named Ekobot AB has introduced a wheeled robot that can autonomously recognize and pluck weeds from the ground rapidly using metal fingers.

    The four-wheeled Ekobot WEAI robot is battery-powered and can operate 10–12 hours a day on one charge. It weighs 600 kg (about 1,322 pounds) and has a top speed of 5 km/h (about 3 mph). It's tuned for weeding fields full of onions, beetroots, carrots, or similar vegetables, and it can cover about 10 hectares (about 24.7 acres) in a day. It navigates using GPS RTK and contains safety sensors and vision systems to prevent it from unintentionally bumping into objects or people.

    To pinpoint plants it needs to pluck, the Ekobot uses an AI-powered machine vision system trained to identify weeds as it rolls above the farm field. Once the weeds are within its sights, the robot uses a series of metal fingers to quickly dig up and push weeds out of the dirt. Ekobot claims that in trials, its weed-plucking robot allowed farmers to grow onions with 70 percent fewer pesticides. The weed recognition system is key because it keeps the robot from digging up crops by mistake.

      Stability AI releases Stable Video Diffusion, which turns pictures into short videos

      news.movim.eu / ArsTechnica · Monday, 27 November - 20:28

    Still examples of images animated using Stable Video Diffusion by Stability AI. (credit: Stability AI)

    On Tuesday, Stability AI released Stable Video Diffusion, a new free AI research tool that can turn any still image into a short video—with mixed results. It's an open-weights preview of two AI models that use a technique called image-to-video, and it can run locally on a machine with an Nvidia GPU.

    Last year, Stability AI made waves with the release of Stable Diffusion, an "open weights" image synthesis model that kick-started a wave of open image synthesis and inspired a large community of hobbyists who have built on the technology with their own custom fine-tunings. Now Stability wants to do the same with AI video synthesis, although the tech is still in its infancy.

    Right now, Stable Video Diffusion consists of two models: one that can produce image-to-video synthesis at 14 frames of length (called "SVD"), and another that generates 25 frames (called "SVD-XT"). They can operate at varying speeds from 3 to 30 frames per second, and they output short MP4 video clips (typically 2–4 seconds long) at 576×1024 resolution.
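    For readers who want to try image-to-video generation locally, the sketch below assumes the Hugging Face diffusers integration of the SVD-XT weights running on an Nvidia GPU; the model ID, input image, and fps value are illustrative choices rather than settings recommended by Stability AI.

        import torch
        from diffusers import StableVideoDiffusionPipeline
        from diffusers.utils import load_image, export_to_video

        # Load the 25-frame SVD-XT variant in half precision (needs a recent Nvidia GPU).
        pipe = StableVideoDiffusionPipeline.from_pretrained(
            "stabilityai/stable-video-diffusion-img2vid-xt",
            torch_dtype=torch.float16,
            variant="fp16",
        )
        pipe.to("cuda")

        # Any still image; the model works at roughly 1024x576 (width x height).
        image = load_image("input.png").resize((1024, 576))

        frames = pipe(image, decode_chunk_size=8).frames[0]  # list of PIL frames
        export_to_video(frames, "output.mp4", fps=7)          # short MP4 clip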
