Large Language Model Usage: Assessing The Risks And Ethics


With the ever-expanding use of large language models (LLMs) to generate information for users, there is an urgent need to assess and understand the risks and ethical implications of any given usage. Even seemingly similar uses can have very different risk and ethical profiles. This post discusses those profiles and illustrates them with some examples.

 

Defining Risk And Ethics In The LLM Context

There is a range of risks and ethical considerations surrounding LLM usage, and they are intertwined with one another. Ethically dubious actions can lead to tangible harm to a user or other stakeholder, as well as legal risk for the organization that enabled the action. At the same time, the known shortcomings and risks inherent in LLMs themselves can lead to ethical problems that would not otherwise be a concern. Let's look at an example of each situation before moving on.

In the case of an ethically dubious action leading to risk, consider someone asking how to make a bomb. Structurally and conceptually, this request isn't any different from asking how to make a salad. LLMs provide instructions and recipes all the time, but providing this specific type of recipe can lead to real harm. LLM providers therefore strive to block this type of prompt, since it is widely considered unethical to answer with a bomb recipe and the risks are clear.

On the flip side, LLM limitations can create risk where it otherwise wouldn't exist. LLMs are known to sometimes get facts wrong. If someone submits a prompt asking for a cookie recipe (not an inherently risky or unethical thing to ask) but the LLM responds with a recipe that contains a harmful ingredient due to a hallucination, then an ethical problem arises. The specific answer to the otherwise innocuous prompt now has ethical issues because it can cause harm.

 

Criteria To Assess Use Cases

To determine the ethical and risk profile of any given LLM use case, multiple dimensions must be weighed. Let's focus on three core dimensions:

  1. The probability of a user acting on the answer
  2. The risk level of that action 
  3. Confidence in the LLM’s answer 

These three dimensions interact with each other, and one or more might fall into a danger zone for either ethics or risk. A complicating factor is that the profile of a use case can change drastically even for very similar prompts. Therefore, while you can assess a use case overall, each specific prompt within the scope of that use case must also be evaluated. In the example above, asking for a recipe sounds innocuous – and generally is – but there are specific exceptions like the bomb recipe. That complexity makes assessing use cases much more difficult!
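To make this concrete, here is a minimal sketch of how those three dimensions might be combined into a screening heuristic. It is purely illustrative and not from the original post: the 0-to-1 scores, the thresholds, and the `assess()` logic are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class PromptProfile:
    """Illustrative 0.0-1.0 scores for the three dimensions above."""
    p_action: float     # probability the user acts on the answer
    action_risk: float  # risk level of that action
    confidence: float   # confidence in the LLM's answer

def assess(profile: PromptProfile) -> str:
    """Toy screening heuristic; thresholds are assumptions, not calibrated values."""
    # Hard block: near-certain severe harm (e.g., weapon instructions),
    # regardless of how confident the model is in its answer.
    if profile.action_risk >= 0.95:
        return "block"
    # Escalate: likely to be acted on, meaningfully risky, and lacking
    # the high confidence such an answer would require.
    if profile.p_action * profile.action_risk >= 0.5 and profile.confidence < 0.8:
        return "review"
    return "allow"

# The two recipe prompts discussed above (scores are illustrative):
bomb_recipe = PromptProfile(p_action=0.8, action_risk=1.0, confidence=0.9)
cookie_recipe = PromptProfile(p_action=0.9, action_risk=0.6, confidence=0.7)

print(assess(bomb_recipe))    # "block": the action itself is unacceptable
print(assess(cookie_recipe))  # "review": a hallucinated ingredient could harm
```

Note that the bomb prompt is blocked on the risk dimension alone, while the cookie prompt is flagged only because of the interaction between likelihood of action and imperfect confidence.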

 

How Prompts Can Change The Profile Of A Use Case

Let's consider a use case of requesting a substitution for an item. On the surface, this use case does not appear ethically fraught or risk laden. In fact, for most prompts it is not. But let's examine how two different prompts fitting this use case can have drastically different profiles.

First, consider a prompt asking for another restaurant to visit because the one I've arrived at is closed. There is no risk or ethical problem here. Even if the LLM gives a hallucinated restaurant name, I'll realize that when I go to look up the restaurant. So, while there is a high probability I'll act on the answer, the risk of that action is low, and it won't matter too much if confidence in the answer is low. We're in the clear from both an ethics and a risk perspective.

Now let's consider a prompt asking for a substitute ingredient I can put into my casserole to replace something I'm out of. I am again likely to act on the answer. However, that action carries risk: I will be eating the food, and an inappropriate substitution could cause problems. In this case, we need high confidence in the answer because there is high risk if an error is made. There are both ethical and risk concerns with answering this prompt, even though it is structurally and conceptually the same as the first one.
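Running the two substitution prompts through the illustrative `assess()` sketch from earlier shows how the same use case yields different verdicts (again, the scores are assumed for the example):

```python
# Same "substitute an item" use case, very different profiles:
closed_restaurant = PromptProfile(p_action=0.9, action_risk=0.1, confidence=0.6)
casserole_swap = PromptProfile(p_action=0.9, action_risk=0.7, confidence=0.6)

print(assess(closed_restaurant))  # "allow": a wrong answer self-corrects on lookup
print(assess(casserole_swap))     # "review": acting on a wrong answer means eating it
```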

 

How To Manage Your Risks

These examples illustrate how even seemingly straightforward and safe general use cases can have specific instances where things go off the rails! It isn't just about assessing a high-level use case, but also about assessing each prompt submitted within that use case's scope. That is a far more complex assessment than we might initially expect to undertake.

This complexity is why LLM providers are constantly updating their applications and why new examples of troublesome outcomes keep hitting the news. Even with the best of intentions and diligence, it is impossible to account for every possible prompt and to identify every possible way that a user might, whether intentionally or not, abuse a use case. 

Organizations must be extremely diligent in implementing guardrails around their LLM usage and must constantly monitor that usage to identify when a specific prompt injects risk and/or ethical concerns where there usually would be none. In short, assessing the risk and ethics of an LLM use case will be a complex and ongoing process. That doesn't mean it won't be worth the effort, but you must go in with your eyes wide open to the effort it is going to take.
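As one hedged sketch of what that guardrail-plus-monitoring loop could look like, the wrapper below screens every prompt before the model is called and logs every verdict. The `llm_call` and `classify_prompt` hooks are hypothetical placeholders (the latter could be something like the `assess()` heuristic above), not a real provider API.

```python
import logging
from typing import Callable

logger = logging.getLogger("llm_guardrails")

def guarded_completion(prompt: str,
                       llm_call: Callable[[str], str],
                       classify_prompt: Callable[[str], str]) -> str:
    """Wrap each LLM call with per-prompt screening and monitoring.
    Both hooks are hypothetical: `classify_prompt` returns "allow",
    "review", or "block"; `llm_call` invokes the model."""
    verdict = classify_prompt(prompt)
    logger.info("prompt screened: verdict=%s", verdict)  # monitor every prompt

    if verdict == "block":
        return "Sorry, I can't help with that request."
    answer = llm_call(prompt)
    if verdict == "review":
        # Queue for human review rather than silently trusting the answer.
        logger.warning("high-risk/low-confidence prompt flagged for review")
    return answer

# Minimal demo with stub hooks:
reply = guarded_completion(
    "What can I substitute for eggs in my casserole?",
    llm_call=lambda p: "(model answer here)",
    classify_prompt=lambda p: "review",
)
```

The design point is that screening happens per prompt, not per use case, which is exactly where the examples above showed blanket assessments breaking down.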

 

Originally posted in the Analytics Matters newsletter on LinkedIn
