Generative AI Models
Lawsuit Alleges Microsoft Trained AI on Private LinkedIn Messages
Cisco Attacks Security Threats With New AI Defense Offering
The primary goal of icebreakers is to establish a connection with whomever you happen to meet, spark interest, and set a comfortable foundation for dialogue. Appearing at ease when employing an icebreaker is paramount. Ideally, you might bounce the icebreakers you intend to use off a friend or confidant, who could help refine them and reject ones that seem wonderful to you but would be disastrous in actual practice. Be cautious in trusting what you see and hear from AI, even if the AI is pretending to be you.
Meanwhile, smaller models like Falcon2-11B proved to be resource-efficient alternatives for targeted tasks, maintaining competitive accuracy without the extensive computational demands of larger models.

The key to all usage of generative AI is to stay on your toes, keep your wits about you, and always challenge and double-check anything the AI emits. A generative AI conversation can continue for as long as you wish.
Today, companies need specialized security solutions that protect AI systems and their components from various security threats (e.g., adversarial attacks) and vulnerabilities (e.g., data poisoning). These security products must protect the data, algorithms, models, and infrastructure involved in AI applications. As companies develop new AI applications, developers need a set of AI security and safety guardrails that work for every application.
Performance evaluation and insights
The plaintiff seeks $1,000 in damages, along with the possibility of further relief.

“This new development not only enhances the experience for our customers but also demonstrates our dedication to integrating the transformative potential of AI. Moving forward, incorporating AI-generated content will also be a lever for us to further increase efficiency, flexibility and personalization in future content creation,” the spokesperson said.
The poll found that responsible AI is important to executives, with 87% of respondents rating it a high or medium priority for their organization.

Notice that I questioned the generative AI about its seemingly downbeat advice. Fortunately, the AI opted to back down and admitted it was wrong. Had I not questioned the AI, there is a chance the AI might have continued with the foul advice, and I could have gotten myself into an even greater funk. Don’t let anyone bamboozle you into thinking that generative AI is going to be the best thing since sliced bread when it comes to finding ways to overcome imposter syndrome.
With the average company using over 76 security products, security teams need simplicity. Cisco AI Defense aligns with established industry standards, making it easier for organizations to meet regulatory requirements and demonstrate compliance during audits.

In today’s column, I examine the use of generative AI and large language models (LLMs) to aid those who are experiencing imposter syndrome.
Generally, the idea is that sometimes a person feels doubtful of their abilities, even believing themselves to be essentially a fraud when it comes to their self-worth. This is a surprisingly common qualm and can be quite debilitating. This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Large language models, including prominent examples like GPT-4, Falcon2, and BERT, have brought groundbreaking capabilities to cybersecurity. Their ability to parse and contextualize massive amounts of data in real time allows organizations to detect and counteract a wide range of cyber threats. Whether analyzing network traffic for anomalies or identifying phishing attempts through advanced natural language processing (NLP), LLMs have proven to be invaluable tools.
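As a rough illustration of that phishing-detection use case, here is a minimal sketch using the Hugging Face transformers pipeline. The model name is a placeholder assumption; any classifier fine-tuned on phishing versus legitimate email would slot in.

```python
# Minimal sketch: scoring a suspicious email with a text-classification
# pipeline. The checkpoint name is hypothetical -- substitute a real
# phishing-detection model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-org/phishing-email-classifier",  # placeholder checkpoint
)

email = (
    "Your account has been suspended. Click http://example.com/verify "
    "within 24 hours to restore access."
)

result = classifier(email)[0]
print(f"label={result['label']} score={result['score']:.2f}")
```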
Over the past several years, the security landscape has rapidly evolved with the introduction of AI, specifically generative AI. AI has spawned numerous new categories of cyber threats, such as data inference, transfer-learning attacks and model inversion. Additionally, AI-enhanced phishing attacks are driving increased breaches and data loss.
Addressing security vulnerabilities in LLMs
Several companies offer software that provides explainability for generative AI models; however, banks need to do their own testing. OpenAI’s partnership with Microsoft “provided us with a high degree of comfort,” said Chris Nichols, director of capital markets at the bank. “We then completed further due diligence on the quality of their models and usability.” Banks have to navigate these risks, starting with proper vetting and due diligence of companies and their models.
Icebreakers are a common social mechanism for engaging people you’ve newly met. A lousy icebreaker could land like a dud and forever leave a foul impression on the other person.

The words uttered by this AI persona might be some canned dialogue that has nothing to do with you and doesn’t mimic your speaking style or vocabulary. That could be a deal-breaker in terms of inspiring you to buy the product or service at hand.
This is a bridge too far concerning the upright and sensible use of contemporary AI.

In the coming months, Pipeshift will also introduce tools to help teams build and scale their datasets, alongside model evaluation and testing. This will speed up the experimentation and data preparation cycle, enabling customers to leverage orchestration more efficiently.

However, the complaint did not state that the plaintiffs have evidence of the shared InMail contents.
Cisco’s latest announcement of AI Defense showcases how the intersection of AI and cybersecurity requires an evolution of a company’s security strategy. By addressing the unique risks posed by AI applications and providing tools tailored to the needs of SecOps teams, Cisco has positioned itself as a contender in the new AI security realm. Cisco AI Defense can implement policies restricting employee access to unsanctioned AI tools. It allows organizations to enforce policies on how AI applications are accessed and used, ensuring compliance with internal and external regulations.
- Microsoft is one of the biggest investors and developers in the AI space, but it’s not the only one—see the others on our list of the top AI companies to better understand who is defining this dynamic technology.
- The Hugo Boss spokesperson said the company believes that using generative AI to present its products will provide a stronger customer experience, particularly as it continues to iterate.
Marketers certainly think so, and marketing studies bear out this possibility. We are bombarded daily with advertising that showcases this popular person or that big-time superstar in hopes of garnering our attention and our wallets.

Again, you can give credit where credit is due, in the sense that if someone can enhance their thinking processes by making use of generative AI, we should probably laud such usage. The issue is that this goes beyond the norm and at times enters a Twilight Zone. The person becomes overly preoccupied with trying to think as AI “thinks” and even comes to believe that AI is sentient (we don’t have sentient AI yet).
Cisco AI Defense delivers tangible benefits to stressed SecOps teams by offering enhanced visibility, streamlined security management, and proactive threat mitigation. For example, the platform provides detailed insights into AI application usage across the enterprise to improve visibility into AI-powered apps and workflows. Security teams can detect and analyze potential vulnerabilities in real time by monitoring network traffic and API interactions.
Alex posted on his social media that he welcomes suggestions from those who have owned smartwatches. I fed into generative AI that content and some other content that Alex had posted online. I instructed generative AI to pretend to be Alex and try to sell Alex on going through with a smartwatch purchase.

It seems relatively apparent that generative AI could simulate Lincoln since there is tons of content online that depicts what he was like.
The odds are that the AI persona will be convincing since it will be as though you are staring right back at yourself. Does the AI that was directed to do this have a legal path to use the likeness of the person? A twist: suppose the AI persona is directed only at the person being mimicked.
In the past, other brands have taken heat from consumers for choosing AI-generated content over human-first content; in 2024, Selkie’s decision to use AI to help design a Valentine’s Day collection saw criticism from consumers. In 2023, Levi’s saw anger from consumers after saying it would use AI to generate images of models with more diverse body types and a wider variety of skin tones. Companies like WHP Global, Adore Me, Eileen Fisher, Mango and others have experimented with using generative AI to create digital models for their product detail pages (PDPs). They have partnered with a range of third-party companies, including AI.Fashion and Veesual.

“As a strategic investment unit, we want to make sure that the companies that we invest in are a fit for us, not just from an investment standpoint but also from a commercial usage standpoint,” Purushotham said.
If you go with hyperscaler services, like Vertex AI, you’re locked into a specific cloud. On the other hand, if you go solo and build in-house, there’s the challenge of resource constraints, as you have to set up a dozen different components just to get started, let alone optimize or scale downstream.

Google has released its own reasoning model, Gemini 2.0 Flash, and other tech firms probably will, too. Customers will be able to draw on multiple models from different providers. And although generative-AI models may improve a little through their interactions with customers, they lack true network effects, unlike the products Google and Facebook made in the past era.
In that sense, the product or service is not entirely out of the blue. Inspecting the ad, you decide that maybe now is the time to make that purchase. If we had only this snippet of a conversation, the odds are that we would not be on alert that the person is going overboard on their AI usage.
Cisco AI Defense helps developers protect AI systems from attacks and safeguards model behavior across platforms. Security teams must understand who is building applications and the training sources for these new applications. Cisco AI Defense provides security teams with visibility into all third-party AI applications used within an organization, including tools for conversational chat, code assistance, and image editing.
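Cisco has not published the internals of this discovery capability, but the general idea of surfacing shadow AI usage from egress traffic can be sketched generically. The domain list and log format below are illustrative assumptions, not Cisco’s implementation.

```python
# Generic illustration (not Cisco's implementation): flag outbound requests
# to known generative-AI endpoints in a proxy log. The domain list and log
# format are assumed for the example.
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def flag_shadow_ai(log_lines):
    """Yield (user, domain) pairs for traffic to known AI services."""
    for line in log_lines:
        # Assumed log format: "<timestamp> <user> <destination-domain>"
        _, user, domain = line.split()
        if domain in KNOWN_AI_DOMAINS:
            yield user, domain

sample_log = [
    "2025-01-15T10:02:11Z alice api.openai.com",
    "2025-01-15T10:03:40Z bob intranet.example.com",
]
for user, domain in flag_shadow_ai(sample_log):
    print(f"{user} accessed an unsanctioned AI service: {domain}")
```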
You have to set up 10 different inference components and instances to get things up and running and then put in thousands of engineering hours for even the smallest of optimizations.
In other words, the AI persona is not used to try and sell anything to anyone else. It is a one-of-a-kind AI persona that is devised solely to appeal to the person being mimicked. Speaking of tradeoffs, there are apparent AI ethical and AI legal issues concerning this use of AI personas.

A colleague that I’ll refer to here as Alex has recently mentioned that he was considering buying a smartwatch. He had been looking at various online reviews and visiting websites that discussed the ins and outs of smartwatches.
Yes, the advent of generative AI has fostered a segment of users who are quite infatuated with AI. This is especially disconcerting because it is seemingly happening at scale. On the other hand, the expression is sometimes used as a wake-up call. Someone who cares about what is happening could be trying to hint that there is something untoward arising. The catchy phrase about living in your head rent-free allows them to warn in a less threatening manner.
Jan Philipp Wintjes, executive vice president of global omnichannel at Hugo Boss, shared on his LinkedIn that the company had started using generative AI to create images and video.

Dynamo AI, a company that participated in last year’s fintech lab, offers help with uncovering gen AI hallucinations, Gotsch said. If a model passes the accuracy test, it’s then tested for privacy, security, toxicity and the ability to be jailbroken. These tests usually make up the other 50% of the questions, he said. “At least half the questions are for accuracy to limit hallucinations,” Nichols said. “We have subject matter experts provide both questions and expected answers and then we review and rate how well the model does with its output.”
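Nichols does not describe the bank’s tooling, but the question-and-expected-answer testing he outlines can be sketched as a simple evaluation loop. The data, scoring rule, and stubbed model below are illustrative assumptions.

```python
# Illustrative sketch of SME-driven accuracy testing (not the bank's actual
# harness): compare model answers against expert-provided expected answers.
test_cases = [
    # (question, expected answer) pairs supplied by subject matter experts
    ("What does APR stand for?", "annual percentage rate"),
    ("What does FDIC stand for?", "Federal Deposit Insurance Corporation"),
]

def rate_answer(model_answer: str, expected: str) -> bool:
    """Crude substring pass/fail; real reviews use human raters."""
    return expected.lower() in model_answer.lower()

def evaluate(ask_model, cases):
    """ask_model is any callable that sends a question to the model under test."""
    passed = sum(rate_answer(ask_model(q), exp) for q, exp in cases)
    return passed / len(cases)

# Example run with a stubbed model standing in for the real one:
accuracy = evaluate(lambda q: "APR stands for annual percentage rate.", test_cases)
print(f"accuracy: {accuracy:.0%}")  # 50% -- only the first case passes
```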
“They had a lot of interest from people wanting to know about them,” she said. “If they’re using a tool and got these results, if they can’t explain it, they can’t use it,” Gotsch said. This is why for the past two years banks have been piloting generative AI internally, she said. In Gotsch’s view, the biggest question about generative AI models as well as more traditional AI models is explainability.
Nearly any well-known figure in history could potentially be simulated or mimicked by modern-day AI. There are more ways that AI is going to potentially be in our minds. For example, you might be aware of the various brain-machine interfaces (BMI) that are being developed and gradually being fielded (if interested, see my review at the link here). These specialized devices are intended to marry the human mind with the capabilities of computing-based AI.
You tell the AI in a prompt that the AI is to pretend to be a person who is experiencing imposter syndrome but doesn’t know what to do about it. The AI then will act that way, and you can practice guiding it toward a positive turnaround. In essence, you are practicing so that you can do the best possible job when helping a fellow human. For more about how to tell generative AI to carry out a pretense, known as an AI persona, see my coverage at the link here.
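As a concrete sketch of setting up such a pretense, here is one way to phrase the persona with the OpenAI Python client. The model name and prompt wording are assumptions; any chat-capable LLM would work similarly.

```python
# Minimal sketch of an AI persona for counseling practice. The model name
# and prompt wording are assumptions; adapt to whichever LLM you use.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

persona = (
    "Pretend to be a person who is experiencing imposter syndrome but "
    "doesn't know what to do about it. Stay in character and respond "
    "as that person would."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model works
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "How have you been feeling at work lately?"},
    ],
)
print(response.choices[0].message.content)
```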
There was excitement from those who expected its reasoning capabilities to be a big step towards superhuman intelligence, and scepticism because OpenAI did not release it to the public and had every incentive to overplay the firm’s pioneering role in AI to curry favour with Donald Trump, the incoming American president.

A potential concern when using generative AI is the possibility of privacy intrusions. Whatever you enter into generative AI is not necessarily going to be treated in any confidential way.

The roughest angle to imposter syndrome seems to be a potentially vicious cycle that can ensue. Something happens that the person interprets as reinforcing the syndrome.
“So we look at whether the company is targeting its products and services to large financial institutions.” Many do.

Mr Chollet set a limit of $10,000 on the amount that contestants can spend on computing power to answer the 400 questions in his challenge. When OpenAI put forward a model under the limit, it spent $6,677 (about $17 per question) to score 82.8%. The score of 91.5%, achieved by o3, came from blowing the budget.
Observe how AI opted to use a logic-based argument to persuade Alex. That fits Alex’s style of writing based on the content scanned by the AI and is a better approach than trying to make an emotional appeal in this instance. If the content reviewed by generative AI to enact the persona had seemed more emotionally based, the sales pitch would have gone in that direction instead. One of the most popularly invoked personas entails generative AI pretending to be Abraham Lincoln. A teacher might tell a generative AI app such as ChatGPT to simulate the nature of Honest Abe.
Note that the AI immediately expressed a sense of empathy or understanding for my expressed concerns. This might seem strange since the AI is a machine and not sentient (we don’t have sentient AI yet). Turns out that generative AI can appear to be empathetic via computational wordsmithing, see my discussion at the link here. It might seem that some people are naturally able to start conversations.
- The study calls for a multi-faceted approach to enhance the integration of LLMs into cybersecurity.
- When you have to run different models, stitching together a functional MLOps stack in-house — from accessing compute, training and fine-tuning to production-grade deployment and monitoring — becomes the problem.
- Make sure to give scrutiny to anything AI says, and anything that humans say about AI.
The threat of sensitive corporate data leakage into open foundation models is both real and pervasive. Meanwhile, advanced data-theft attacks and the poisoning of proprietary corporate data are examples of burgeoning AI security threats. Cisco’s AI Defense offers security teams visibility, access control and threat protection.

The major difference between Hugo Boss’s use of generative AI in this context, when stacked up against other fashion companies, is its ability to use video. To date, the majority of AI-generated content created for well-known fashion and apparel companies’ PDPs, advertising campaigns or marketing strategies has been composed of still images.
I am betting that you would like to see an example of how generative AI enters this realm. The issue though is that finding someone willing to spend the time to do so might be difficult. Furthermore, having to admit to that person that you are struggling with icebreakers might be a personal embarrassment. The additional issue is that you might suddenly think of an icebreaker late at night and want to immediately test it out.
Instead of domination by one firm, some expect model-making to be more like an oligopoly, with high barriers to entry but no stranglehold—or monopoly profits. For now, OpenAI is the leader, but one of its main rivals, Anthropic, is reportedly raising money at a $60bn valuation, and xAI, majority-owned by Elon Musk, is worth $45bn. With o3, OpenAI has demonstrated its technical edge, but its business model remains untested.

When OpenAI announced a new generative artificial-intelligence (AI) model, called o3, a few days before Christmas, it aroused both excitement and scepticism.
I have repeatedly cautioned that society is in a grand loosey-goosey experiment, and we are all guinea pigs when it comes to the widespread usage of generative AI and LLMs. This especially comes up when considering the mental health outcomes of using AI.

In today’s column, I unpack the famous saying that you shouldn’t let things live in your head rent-free, which in this instance can be applied to the advent of generative AI and large language models (LLMs). Some people are obsessing over generative AI and going to extremes. They are allowing modern AI LLMs to demonstrably shape their lives.