Posts tagged with "Artificial intelligence (AI)"

Research shows AI dog personality algorithm could match you with new ‘best friend’

February 8, 2024

A multi-disciplinary research team specializing in canine behavior and artificial intelligence has developed an AI algorithm that automates the high-stakes process of evaluating potential working dogs’ personalities. They hope to help dog training agencies more quickly and accurately assess which animals are likely to succeed long-term in careers such as aiding law enforcement and assisting persons with disabilities.

The personality test could also be used for dog-human matchmaking—helping shelters with proper placement and, thus, reducing the number of animals returned for not being a good fit with their adoptive families, reports EurekAlert.

The scientists, from the University of East London and University of Pennsylvania, conducted the research on behalf of their sponsor, Dogvatar, a Miami, Fla.-based canine technology startup. They announced the dog personality testing algorithm results in their paper, “An Artificial Intelligence Approach To Predicting Personality Types In Dogs,” published January 29 in Scientific Reports.

The AI algorithm was trained on data from nearly 8,000 responses to the widely used Canine Behavioral Assessment & Research Questionnaire (C-BARQ). For over 20 years, the 100-question C-BARQ survey has been the gold standard for evaluating potential working dogs.

“C-BARQ is highly effective, but many of its questions are also subjective,” said co-principal investigator and Dogvatar CEO ‘Alpha Pack Leader’ Piya Pettigrew.

“By clustering data from thousands of surveys, we can adjust for outlying responses inherent to subjective survey questions in categories such as dog rivalry and stranger-directed fear.”

The research team’s experimental AI algorithm works in part by clustering the responses to C-BARQ questions into five main categories that ultimately shape the digital personality thumbprint a given dog receives. These personality types were identified and described by analyzing the most influential attributes in each of the five categories: “excitable/attached,” “anxious/fearful,” “aloof/predatory,” “reactive/assertive,” and “calm/agreeable.”

The data points that feed into those ultimate clusters include behavioral attributes such as “excitable when the doorbell rings,” “aggression toward unfamiliar dogs visiting your home,” and “chases or would chase birds given the opportunity.”

Each attribute is given a “feature importance” value, which is essentially how much weight the attribute receives as the AI algorithm calculates a dog’s personality score. “It’s rather remarkable; these clusters are very meaningful, very coherent,” said co-author James Serpell of the University of Pennsylvania.
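The paper’s modeling pipeline is not reproduced here, but the general approach it describes, clustering survey responses and then ranking how strongly each attribute shapes the clusters, can be sketched in a few lines. The sketch below is a hypothetical illustration using scikit-learn on synthetic data; the attribute names, scoring scale, and importance method are assumptions, not the researchers’ code or the actual C-BARQ items.

```python
# Minimal sketch (not the paper's implementation): cluster survey-style
# responses into five groups, then estimate which attributes drive them.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical behavioral attributes scored 0-4, one row per dog.
attributes = [
    "excitable_when_doorbell_rings",
    "aggression_toward_unfamiliar_dogs",
    "chases_birds_given_opportunity",
    "stranger_directed_fear",
    "dog_rivalry",
]
X = rng.integers(0, 5, size=(8000, len(attributes))).astype(float)

# Standardize the responses, then cluster dogs into five personality groups.
X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_scaled)

# One common way to get a "feature importance" per attribute: train a
# classifier to reproduce the cluster labels and inspect its importances.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_scaled, labels)
for name, importance in sorted(
    zip(attributes, clf.feature_importances_), key=lambda pair: -pair[1]
):
    print(f"{name}: {importance:.3f}")
```

With real survey data, the printed ranking would indicate which behaviors, such as doorbell excitability or stranger-directed fear, contribute most to separating the five personality types.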

Dogvatar and its collaborating researchers intend to conduct further research into potential applications for their dog personality testing algorithm.

“This has been a really exciting breakthrough for us,” said Dogvatar CEO “Alpha Pack Leader” Piya Pettigrew. “This algorithm could greatly improve efficiency in the working dog training and placement process, and could help reduce the number of companion dogs brought back to shelters for not being compatible. It’s a win for both dogs and the people they serve.”

Research contact: @EurekAlert

Biden signs sweeping executive order regulating AI

November 1, 2023

President Joe Biden is directing the U.S. government to take a sweeping approach to artificial intelligence (AI) regulation—his most significant action yet to rein in an emerging technology that has sparked both concern and acclaim, reports Crain’s New York Business.

The lengthy executive order—released on Monday, October 30—sets new standards on security and privacy protections for AI, with far-reaching impacts on companies. Developers such as Microsoft, Amazon, and Google will be directed to put powerful AI models through safety tests and submit results to the government before their public release.

The rule, which leverages the U.S. government’s position as a top customer for big tech companies, is designed to vet technology with potential national or economic security risks, along with health and safety. It will likely only apply to future systems—not those already on the market—a senior administration official said.

The initiative also creates infrastructure for watermarking standards for AI-generated content, such as audio or images, often referred to as “deepfakes.” The Commerce Department is being asked to help with the development of measures to counter public confusion about authentic content.

The administration’s action builds on voluntary commitments to securely deploy AI, adopted by more than a dozen companies over the summer at the White House’s request, and on its blueprint for an “AI Bill of Rights,” a guide for safe development and use.

All 15 companies that signed on to those commitments, including Adobe and Salesforce, will join the president at a signing ceremony at the White House on Monday, along with members of Congress.

Biden’s directive precedes a trip by Vice President Kamala Harris and industry leaders to attend a U.K.-hosted summit about AI risks—giving her a U.S. plan to present on the world stage.

The United States set aside $1.6 billion in fiscal 2023 for AI—a number that’s expected to increase as the military releases more detail about its spending, according to Bloomberg Government data.

“This executive order sends a critical message: … AI used by the United States government will be responsible AI,” International Business Machines Corp. Chairman and Chief Executive Officer Arvind Krishna said in a statement.

Biden also called for guidance to be issued that safeguards Americans from algorithmic bias in housing, in government benefits programs, and by federal contractors.

The Justice Department warned in a January filing that companies that sell algorithms to screen potential tenants are liable under the Fair Housing Act if they discriminate against Black applicants. Biden directed the department to establish best practices for investigating and prosecuting such civil-rights violations related to AI, including in the criminal justice system.

The order also asks immigration officials to lessen visa requirements for overseas talent seeking to work at American AI companies.

While the administration is touting its latest actions as the government’s most robust advancement of AI regulation, Congress may go further.

Biden has called on lawmakers to pass privacy legislation, though he doesn’t yet have a position on how Congress should approach comprehensive regulation of AI, the administration official said.

Senate Majority Leader Chuck Schumer called for America to spend at least $32 billion in the coming years to boost AI research and development.

Lawmakers have been holding briefings and meeting with tech representatives, including Meta Platforms’ Mark Zuckerberg and OpenAI’s Sam Altman, to better understand the technology before drafting legislation.

Research contact: @crainsny

New tool can diagnose Type 2 diabetes using just ten seconds of your voice

October 24, 2023

A new type of artificial intelligence (AI) requires only 6-10 seconds of a voice clip to diagnose Type 2 diabetes—offering a potential breakthrough in screening for the disease, reports Study Finds.

This novel diagnostic method, which has been labeled a “potential game changer,” enables individuals to screen themselves for the disease by simply uttering a few sentences into their smartphones.

The study merges voice technology with artificial intelligence. Developed by Klick Labs in Toronto, the test has an accuracy rate of 89% for women and 86% for men. The technology uses between six and ten seconds of voice recording, along with basic health data such as age, gender, height, and weight. This information feeds into an AI model designed to determine if an individual has Type 2 diabetes.

For the study, 267 participants, identified as either non-diabetic or Type 2 diabetic, were instructed to record a specific phrase on their smartphones six times a day over a span of two weeks. From the amassed 18,000+ recordings, scientists examined 14 distinct acoustic attributes to discern differences between the two groups.

The findings, published in Mayo Clinic Proceedings: Digital Health, delve deep into vocal characteristics—identifying subtle changes in pitch and intensity that are imperceptible to the human ear. Through advanced signal processing, the researchers could pinpoint vocal alterations caused by Type 2 diabetes, noting that these changes differed between men and women.
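Klick Labs has not published its model, but the workflow the study describes, extracting pitch and intensity measurements from a short recording and combining them with basic health data for classification, can be sketched roughly as follows. This is a hypothetical illustration using librosa and scikit-learn on a synthetic tone and synthetic labels; the 14 acoustic attributes and the model the researchers actually used are not reproduced here.

```python
# Minimal sketch (not the Klick Labs model): pull simple pitch and intensity
# features from a short clip, append basic health data, and classify.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

sr = 16000
duration = 8.0  # roughly the 6-10 second window described in the study
t = np.linspace(0, duration, int(sr * duration), endpoint=False)
clip = 0.1 * np.sin(2 * np.pi * 180 * t)  # stand-in for a real voice recording

# Pitch (fundamental frequency) and intensity (RMS energy) summaries.
f0, voiced_flag, _ = librosa.pyin(
    clip, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
voiced_f0 = f0[~np.isnan(f0)]
pitch_mean = float(voiced_f0.mean()) if voiced_f0.size else 0.0
pitch_std = float(voiced_f0.std()) if voiced_f0.size else 0.0
rms = librosa.feature.rms(y=clip)[0]
intensity_mean, intensity_std = float(rms.mean()), float(rms.std())

# Combine voice features with basic health data (age, gender code, height cm,
# weight kg), as the study describes feeding both into its model.
sample = np.array([pitch_mean, pitch_std, intensity_mean, intensity_std,
                   52, 1, 170, 80], dtype=float)

# Train on synthetic data purely to illustrate the classification step.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, sample.size))
y_train = rng.integers(0, 2, size=200)  # 0 = non-diabetic, 1 = Type 2 diabetic
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Predicted probability of Type 2 diabetes:", model.predict_proba([sample])[0, 1])
```

In a real screening pipeline, the training matrix would hold labeled recordings from study participants rather than random numbers, and the feature set would cover the study’s full list of acoustic attributes.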

“Our research highlights significant vocal variations between individuals with and without Type 2 diabetes and could transform how the medical community screens for diabetes,” says Klick scientist Jaycee Kaufman, the paper’s lead author. “Current methods of detection can require a lot of time, travel, and cost. Voice technology has the potential to remove these barriers entirely.”

Globally, nearly half of the 480 million adults with diabetes are unaware of their condition. Furthermore, approximately 90 percent of all diabetic cases are Type 2.

“Our research underscores the tremendous potential of voice technology in identifying Type 2 diabetes and other health conditions,” says Yan Fossat, VP of Klick Labs and the study’s principal investigator. “Voice technology could revolutionize healthcare practices as an accessible and affordable digital screening tool.”

He further notes its potential applications, including tests for high blood pressure, prediabetes, and various women’s health issues.

Research contact: @StudyFinds

In the pink: AI gives droll Barbie and Ken makeovers to Princess Kate, Prince William, and Joe Biden

July 25, 2023

Your favorite celebrities and politicians—“Barbified.” Ever wonder what Joe Biden would look like in a Barbie World? You’re in luck: An enterprising film editor is cashing in on the rabid “Barbie” movie craze by giving the U.S. president and other A-listers Mattel-inspired makeovers with the aid of artificial intelligence (AI), reports the New York Post.

“I absolutely loved making these photos—I was laughing the whole way through,” freelancer Duncan Thomsen, 53, said of his star-studded AI “Barbiefication” campaign.

The U.K. native, who is a “big fan of Ryan Gosling,” told South West News Service he was inspired to transform celebs into Barbies and Kens considering the hype surrounding Greta Gerwig’s much-anticipated live action film, which dropped on Friday, July 21, in movie theaters across America.

Also, “who wouldn’t love a Barbie makeover?” Thomsen declared.

To bring famous figures to life in simulated plastic, the digital wizard turned to scarily sophisticated AI software Midjourney, which responds to user prompts and commands—and generates pics by cross-referencing billions of online images.

This process took some time as AI—despite rendering us obsolete in every sector from academia to life partners—requires super specific commands with an “absolute description,” Thomsen explained.

Thankfully, the freelancer’s project paid dividends as he was able to create a variety of celebrity doll-ppelgangers.

Perhaps the highlight was U.S. Commander-in-Chief Joe Biden reimagined as Ken with the trademark fufu pink regalia, six pack abs, and a pink car.

“When has an American president ever had a six pack on show before?” chortled Thomsen, who gave a similar treatment to former U.S. President Barack Obama. (Barack Obarbie?)

AI might not be able to replace our leaders yet, but it can give them a helluva makeover.

Others include former U.K. Prime Minister Margaret Thatcher gussied up in a pink pantsuit and a Barbie version of Princess of Wales Kate Middleton that looked like the royal was cursed by a palm reader she’d spurned.

On the plus side, it looked more lifelike than Middleton’s facsimile at the Krakow Wax Museum.

“What you want to do is capture the person or place’s unique essence; then, bring the Barbie features in. That’s when it starts to look really good,” Thomsen explained.

He also used digital software to turn real-world buildings into a Barbie Dreamhouse.

The Brit summed it up like this: “Creating these images is great fun, so I thought I’d give everyone a dash of pink and ‘Barbie up’ the whole world.”

Unfortunately, not all AI-generated images are so fun and frivolous. In the past, hyper-realistic generative tech has been used for nefarious purposes—from faking images of former President Donald Trump getting arrested by the police to creating pics of a Pentagon explosion (the latter of which resulted in a brief stock selloff).

Research contact: @nypost

Popular Instagram photographer confesses that his work is AI-generated

March 1, 2023

As more and more AI-generated images flood the Internet, you might start thinking that it is easy to tell what is real and what isn’t. For instance, too many fingers or the appearance of random limbs is one obvious giveaway. But, the work of popular Instagram photographer Joe Avery drives home the point that the line between AI imagery and work created by actual photographers is becoming more and more blurred, reports My Modern Met.

Avery’s admired “portrait photography” recently unraveled with the photographer’s own admission that his work is entirely AI-generated. His confession also raises questions about when and how to disclose the use of AI in content creation.

A ‘portrait’ by Joe Avery. (Photo source: My Modern Met)

Avery opened his portrait photography account on Instagram last October. And in just a few short months, his stunning black-and-white photographs amassed a following of about 12,000 people. But what his followers, who wrote enthusiastic comments about how much his work inspired them, didn’t realize is that Avery hadn’t picked up a camera at all. All of his images were created using Midjourney and then retouched by him.

In early January, feeling “conflicted” about deceiving his followers, he came clean to the online publication Ars Technica via email. “[My Instagram account] has blown up to nearly 12K followers since October, more than I expected,” he wrote. “Because it is where I post AI-generated, human-finished portraits. Probably 95%+ of the followers don’t realize. I’d like to come clean.”

Avery went on to clarify that while his original intent was to fool his followers and then write an article about it, he’d come to enjoy the process of creating these AI images and saw it as a creative outlet that he wanted to share openly. Though Avery’s account now clearly states in the bio that the images are AI and that he is creating digital art, that was not always the case.

In fact, prior to his confession, Avery remained vague about the origins of the images and frequently replied to comments by followers praising his work. The account has now deleted all user comments, but PetaPixel published screen captures of these interactions.

Under one image, a portrait photographer who followed the account wrote, “Thank you for the inspiration you provide day after day with your wonderful portraiture. I stop, take a long look, reflect, and most certainly learn from every post you share.” Avery simply replied, “Thanks very much for taking the time to share that. It means a lot.”

In another instance, someone outright asked Avery what equipment he used to shoot his photos and, instead of stating that they are AI-generated, he answered that he uses Nikon. However, Avery told Ars Technica that as his following grew, he started feeling guilty about the deception.

“It seems ‘right’ to disclose [AI-generated art] many ways—more honest, perhaps,” Avery shared. “However, do people who wear makeup in photos disclose that? What about cosmetic surgery? Every commercial fashion photograph has a heavy dose of Photoshopping, including celebrity body replacement on the covers of magazines.”

Of course, techniques to hide certain things or create illusions have long been part of most art forms, but as Ars Technica points out, “misrepresenting your craft is another thing entirely.” Now that he’s come clean, Avery will find out how the public views his deception.

For his part, Avery does see his work as a form of creativity. In explaining his creative process, he stated that he generated nearly 14,000 images using Midjourney in order to arrive at the 160 posted to Instagram. He then combines the best parts of the generated images and retouches them in Lightroom and Photoshop to achieve a realistic look.

“It takes an enormous amount of effort to take AI-generated elements and create something that looks like it was taken by a human photographer,” Avery shares. “The creative process is still very much in the hands of the artist or photographer, not the computer.”

These works of digital art certainly do look like real photos. Given what we’ve seen in terms of unedited AI imagery, a lot of hours were surely spent to make sure that certain aspects like the eyes and hands look real. Many of Avery’s images are also accompanied by a short fictional story about the person pictured. These words certainly enhance the imagery and were likely part of why his account gained popularity.

But now that he’s confessed that these images are digital art and not his own photography, the question is what will the response be? Will people not care and will his following continue to grow? Or will people, particularly other photographers, turn their back on this form of deception?

Currently, he has nearly 28,000 followers and continues to post frequently. While his Instagram biography refers to AI and digital art, he continues to use popular photography hashtags like #peoplephotography on his images, with no hashtags mentioning AI, Midjourney, or digital art.

Avery’s case is an interesting one and could understandably instill fear in photographers who look at AI as yet another way they could lose work. If Avery’s Instagram followers couldn’t tell the difference, advertisers and other paying clients probably couldn’t, either. It’s not difficult to see how we might not be too far away from digital art replacing photography in some scenarios.

Research contact: @mymodernmet

Amazon updates AI after Alexa tells ten-year-old to try ‘penny challenge’

December 31, 2021

Amazon updated its Alexa artificial intelligence (AI), reports The Hill, after a user posted that an Amazon device told her child to do what is known as the “penny challenge,” a company spokesperson said on Thursday, December 30.

“Customer trust is at the center of everything we do and Alexa is designed to provide accurate, relevant, and helpful information to customers,” the spokesperson said in a statement, adding, “As soon as we became aware of this error, we quickly fixed it, and will continue to advance our systems to help prevent similar responses in the future.”

The update comes a few days after a user on Twitter, Kristin Livdahl, said her ten-year-old asked Alexa on an Echo device for a challenge. In response, Alexa gave information about what is known as the “penny challenge.”

“Here’s something I found on the web. According to ourcommunitynow.com: The challenge is simple: plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs,” Alexa responded, according to the screenshot shared on Twitter.

The “penny challenge” trend gained popularity on the video sharing app TikTok, but some fire departments have been warning about potential dangers that it poses. Captain Brian Tanner with the Provo Fire Department in Utah posted a TikTok in January warning users against the challenge.

Livdahl explained that the mother and daughter had been engaged in physical challenges earlier, such as “laying down and rolling over” and “holding a shoe on your foot,” from a [physical education] teacher on YouTube.

Livdahl said that her timely intervention helped avert a disaster.

“I was right there and yelled, ‘No, Alexa, no’ like it was a dog. My daughter says she is too smart to do something like that anyway.”

Research contact: @thehill