Heard on the Street – 12/18/2023


Welcome to insideBIGDATA’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Click HERE to check out previous “Heard on the Street” round-ups.

Generative AI: The New Enterprise Engine. Commentary by Gregory Whiteside, CEO of HumanFirst

“Generative AI is becoming generally available. Currently, there are two ways to use it: first, for creative, one-off use cases, a fast lane to get where you’re going; and second, as repeatable, automatable functions → Function([data]), a new and improved way of always getting where you need to go. The latter is where all the value lies for the AI-enabled enterprise.

In its maturity, Gen AI will function less like a fast lane (accelerating time to arrival at a pre-determined destination) and more like an engine (a method by which companies will take every customer where they need to go across the entire customer journey). We’ll see an inverted curve between vertical tooling (early-to-market solutions designed for specific use cases, built like roads) and base-layer, Excel-like tooling that helps companies build their own engines by organizing their own data libraries and prompt catalogs to build AI on truth, through training, and with trust.”
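To make the “Function([data])” idea concrete, here is a minimal sketch (our illustration, not HumanFirst’s tooling) of how a one-off prompt can be wrapped into a repeatable, automatable function over batches of records. The `complete` callable is a placeholder for whichever LLM client an enterprise actually uses.

```python
from typing import Callable, Iterable

def ai_function(prompt_template: str,
                complete: Callable[[str], str]) -> Callable[[Iterable[str]], list[str]]:
    """Wrap a one-off prompt into a reusable "Function([data])" over records."""
    def run(records: Iterable[str]) -> list[str]:
        # Apply the same prompt to every record, turning a creative one-off
        # into a repeatable, automatable step in a pipeline.
        return [complete(prompt_template.format(record=r)) for r in records]
    return run

# Example: a repeatable summarization step; the lambda is a stand-in "model"
# used only so the sketch runs without any external service.
summarize = ai_function("Summarize in one sentence: {record}",
                        complete=lambda prompt: prompt.upper())
print(summarize(["Quarterly churn rose 2% in the EU segment."]))
```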

Companies aren’t ready for Copilot. Commentary by BigID’s CISO, Tyler Young

“The good thing about Copilot is that it leverages closed-loop AI models, which eliminates some of the generative AI data leak risks. However, most enterprises still don’t have enough safeguards in place when it comes to genAI, even if Copilot uses a closed-loop AI model.

The best strategy for organizations looking to deploy genAI is to purchase business licenses for AI models to provide sanctioned use of AI (limiting access based on user roles, etc.). Additionally, organizations need to ensure they have a deep understanding of the data that they are sending to AI models. They can accomplish this by having data repositories or staging areas where they can scan their data for sensitive information and remove it before it’s sent to the AI model. More can be done to secure sensitive company data when it comes to embracing AI, but this is a start.”
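As a rough illustration of the staging-area approach Young describes (not BigID’s product), the sketch below scans a payload for a few obvious sensitive patterns and redacts them before anything is forwarded to a genAI model. Real deployments would rely on much broader data classification.

```python
import re

# Hypothetical patterns for common sensitive data; a real scanner would cover
# many more categories and use proper classifiers, not just regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of known sensitive patterns with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

payload = "Customer jane.doe@example.com (SSN 123-45-6789) asked about her balance."
print(redact(payload))  # only the scrubbed text would be sent to the model
```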

How the rise of generative AI is creating a new opportunity for digital mavericks. Commentary by Moe Tanabian, Chief Product Officer at Cognite

“Data-heavy industries like oil and gas, manufacturing, and various technology sectors are realizing that Generative AI is a game changer for contextualizing an organization’s data and delivering AI-powered use cases to business users. Companies are now turning to a new breed of industrial leaders: digital mavericks.

Going against the grain of the traditional CIO role and responsibilities, self-identifying digital mavericks are ambitious and intelligent individuals who see an opportunity for digital transformation and want to take advantage of it in a career-defining way. As Generative AI becomes more critical for companies and mainstream society, businesses are looking to digital mavericks to maximize the era of Gen AI by transforming their organization to thrive on agility, scalability, and an accelerated pace of innovation. To best support digital mavericks, organizations should allow these leaders to realign digital KPIs, challenge the traditional thinking of “do it yourself” (DIY) technology projects, and pursue a “land and expand” approach to identify new technologies in the context of high-value use cases. Having an essential partner, like a digital maverick, is crucial to the success of Gen AI for industry.”

The EU Takes the Lead in Ethical AI Standardization. Commentary by Achim Weiss, CEO at IONOS

“AI regulation is at the forefront of global political agendas, and for good reason. Global investments in emerging tech like AI have shed light on a new era: one defined by digital transformation and limitless possibilities for today’s enterprise landscape. Harnessing the power of AI, however, has become a controversial topic leaving thought leaders with one question: what are the risks?

The EU is leading the way for ethical AI standardization with the creation of The AI Act, which will drive risk-mitigation across AI implementation within European nations. Companies that prioritize the responsible use of AI tools are destined to lead in today’s world defined by emerging technologies and digital innovation.”

AI regulation. Commentary by Pukar Hamal, Founder & CEO, SecurityPal

“As new AI regulations are introduced in the U.S. and abroad, it’s critical for organizations to adopt a proactive approach to ensuring their AI implementation is compliant with existing and forthcoming standards, while still driving value for their business. Without the appropriate systems and processes in place to assess implementations and assure compliance, adopting new tech can feel too time-consuming for business leaders to be worthwhile; as a result, they can miss out on the time and resources saved, and the innovation fueled, by cutting-edge AI solutions.

Remember: AI regulation is important, but it’s not designed to stifle innovation altogether. Lean on security and compliance experts to ensure that you’re making the most of disruptive tech while avoiding any GRC hangups.”

AI Summit. Commentary by Ekaterina Almasque, General Partner at OpenOcean

“The mere act of assembling so many heavyweights from the world of AI is a big achievement for the UK. However, it is far from certain whether the AI summit will have any lasting impact. It looks likely to focus mostly on bigger, long-term risks from AI, and far less on what needs to be done, today, to build a thriving AI ecosystem. It’s like a startup worrying about its IPO price before it’s raised seed funding.

The UK should be under no illusions: it is one of many countries seeking to direct the future of AI. Biden’s Executive Order, the AI summit in Paris later in November, the EU’s incoming AI Act – all the major players are lobbying for influence over the future of AI innovation, regulation, and governance.

A successful future for AI is not about competition and countries going it alone. In order to properly regulate AI in a way that fosters innovation, we need to make every effort to connect investors, startups, and policymakers. This involves increasing R&D budgets, creating sovereign funds to support strategic initiatives, and attracting top talent into startups.

Going forward, we must also hear more voices from start-ups themselves. The AI Safety Summit’s focus on Big Tech, and its shutting out of many in the AI start-up community, is disappointing. It is vital that industry voices are included when shaping the regulations that will directly impact technological development. Only through open, robust collaboration between the public and private spheres will we see AI realize its full potential.”

Importance of Data Access Controls Within the Modern Era of Identity Security. Commentary by Gal Helemski, CTO/CPO & co-founder, PlainID 

“Data is one of the most valuable assets within an organization, and it should be treated as such. Cyber adversaries have a firm understanding of just how valuable data is and how having access to the right identity can open many doors leading to financial gain for criminals — and headaches for security professionals. Organizations are tasked with balancing controlled growth while also ensuring company data remains secure and monitored. The problem is, in the modern era, data is constantly flowing and is accessible to many people — sometimes even unknowingly. The priority for all companies should be investing in smart systems and solutions that go beyond just controlling access to a database or table and provide full visibility into all the data within the organization so that data access controls can be properly secured.  

To put it in perspective, once you find that a user is compromised — especially one with administrative credentials — you know that a threat actor is already in your network. Without modern tools that provide full visibility and granular access controls for data professionals, your organization will find itself in a situation where you can no longer see where you are compromised. From here, the focus should be on limiting movement to avoid any further damage and risk. 

This is where dynamic authorization comes into play. Traditional access controls are mostly coarse-grained: they involve plugging in credentials and, if a user has pre-assigned access, being cleared to enter whether they are the real user or a compromised one. Dynamic authorization provides a high level of granularity and adapts based on the context of access. It accounts for a wide set of variables such as the user and their business context, what the user is trying to access, and environmental variables such as time, location and risk metrics. With highly granular access controls, anytime a user tries to access a portion of data, they will be met with a dynamic authorization process completed in a matter of seconds to determine if the user should have access. This process results in a system that keeps organizational and customer data safe and minimizes breaches.”
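A minimal sketch of the dynamic-authorization idea (our simplified illustration, not PlainID’s engine) is shown below: the decision is evaluated at access time from the user’s role and business context, the sensitivity of what is being requested, and environmental signals such as time and a risk score.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccessRequest:
    role: str
    department: str
    resource_sensitivity: str  # e.g. "public", "internal", "restricted"
    risk_score: float          # 0.0 (low) to 1.0 (high), from an upstream risk engine
    timestamp: datetime

def authorize(req: AccessRequest) -> bool:
    """Decide at access time instead of trusting pre-assigned, coarse-grained grants."""
    if req.risk_score > 0.7:
        return False  # anomalous session: deny regardless of role
    if req.resource_sensitivity == "restricted":
        # Restricted data: require the right role, the right business context,
        # and an environmental check (here, business hours in UTC).
        hour = req.timestamp.astimezone(timezone.utc).hour
        return req.role == "analyst" and req.department == "finance" and 8 <= hour < 18
    return req.resource_sensitivity in ("public", "internal")

request = AccessRequest("analyst", "finance", "restricted", 0.2, datetime.now(timezone.utc))
print(authorize(request))  # True only when role, context, and environment all line up
```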

Artificial Intelligence is Evolving the Future of Financial Services. Commentary by Jay Hakim, VP of Business Intelligence and Data Engineering at Beyond Finance

“Artificial intelligence (AI) has ushered in a new level of digitalization for the financial services industry. A 2020 survey from the World Economic Forum shared that 77% of financial institutions ‘anticipate AI will be of high importance to their businesses within two years.’ 

They were right. 

As someone who oversees data analysis for a leading debt resolution organization, I have seen the power of AI become highly important for any financial institution assisting clients with their economic goals. 

One deliverable from our client service agents is a summary of the conversation that accurately represents what was said. AI transcribes and translates the words using large language models (LLMs). Instead of writing a summary from scratch, the agent can accept the one the AI model provides, or augment it with further details using the AI-generated summary as a starting place. That starting point saves time and money and reduces delays, which allows us to provide a solution faster. We could apply similar technology to the chat feature our customers use to engage with us.

The finance industry naturally draws, let’s say, “curious” people who are trying to game the system for fraud. We could proactively introduce AI technology to spot peculiar transactions and flag them for additional investigation.

Our clients have gotten themselves into a tough spot. It’s our commitment and job to help them out today and in the future with financial issues and protect them moving forward. Who’s to say we couldn’t upload their expenses and proactively recommend ways they can trim their costs? For example, there may be many video streaming services or restaurant charges. We could take their expenses and compare them to market averages to determine if they are above or below people with similar earnings. AI gives us that vantage point and the insight to offer more assistance.

The future is ours. It’s here to evolve our industry and better serve those who trust us with their finances. With accurate prompts and the right insights, AI is a tool we can use to fix problems and provide solutions for so many people nationwide.”
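The call-summarization workflow Hakim describes can be sketched in a few lines. The example below is illustrative only: it assumes the OpenAI Python SDK and a placeholder model name, and any LLM provider could fill the same role.

```python
from openai import OpenAI  # assumed SDK choice for this sketch

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_call_summary(transcript: str) -> str:
    """Ask the model for a short, factual draft the agent can accept or augment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model, not the firm's actual stack
        messages=[
            {"role": "system",
             "content": "Summarize this client call in three accurate bullet points. "
                        "Do not add details that are not in the transcript."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# draft = draft_call_summary(transcript_text)
# The agent then accepts the draft as-is, or edits it, before it is saved to the record.
```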

Cloud-based data virtualization. Commentary by Amit Sharma, CEO at CData

“In today’s data-rich landscape, effective management is paramount. Data virtualization in the cloud provides a centralized data access solution, integrating multiple data sources through a single point. Through virtualization, an organization’s data stays at its source, relying on the security and compliance measures of the native services or systems and avoiding the risks of duplication. With a unified security model, risks associated with data breaches diminish, allowing for a singular, robust protection protocol. Coupled with the ease of implementing and monitoring compliance policies, data virtualization in the cloud ensures data remains secure and audit-ready, setting the gold standard for modern data management.”
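As a toy illustration of that single-access-point idea (not CData’s platform), the sketch below routes queries through one interface to the systems that actually hold the data, with two in-memory SQLite databases standing in for remote sources; nothing is copied into a central store.

```python
import sqlite3
from typing import Any

# Two stand-in source systems; in practice these would be live SaaS apps,
# warehouses, or databases reached through their own connectors.
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
crm.execute("INSERT INTO customers VALUES (1, 'Acme Corp')")

billing = sqlite3.connect(":memory:")
billing.execute("CREATE TABLE invoices (customer_id INTEGER, amount REAL)")
billing.execute("INSERT INTO invoices VALUES (1, 1250.0)")

SOURCES = {"crm": crm, "billing": billing}

def query(source: str, sql: str) -> list[tuple[Any, ...]]:
    """Single entry point: dispatch the query to the source that owns the data."""
    return SOURCES[source].execute(sql).fetchall()

print(query("crm", "SELECT * FROM customers"))
print(query("billing", "SELECT * FROM invoices WHERE amount > 1000"))
```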

AI is on the Rise in Healthcare: How Providers can Adapt to Patient Demand. Commentary by Branden Neish, Chief Product & Technology Officer at Weave

“AI advancements have quickly taken root across all industries, and healthcare may be where they are most widely discussed and debated. Patients are becoming accustomed to AI in their daily lives, and a recent survey revealed that 67% of consumers believe that AI will soon become a commonly used tool in healthcare.

Younger generations are more keen on the use of AI in a healthcare setting, with 70% of Millennials and 64% of Gen Z expecting their providers to integrate AI into everyday practice workflows. As consumer expectations shift, providers must prioritize exceptional patient experience, driven by integrated technology, or run the risk of patient attrition. Incorporating AI to simplify appointment scheduling, respond to feedback and create better communication between patients, doctors and staff can improve access to healthcare, increase accuracy and reduce administrative headaches. Through thoughtful implementation of AI, providers can also make improvements that alleviate ongoing employee burnout and staffing issues.”





