Would you trust a robot lawyer?

A new survey for Robin AI reveals that while nearly one in three people would be open to letting a robot lawyer represent them in court, the vast majority would only do so if a human lawyer were overseeing the process.
The research, carried out by Perspectus Global, polled 4,152 people across the US and UK and found that, on average, respondents would need a 57 percent discount to choose an AI lawyer over a human.
Overcoming the skills gap with robust, easy-to-use AI

When it comes to adopting new technologies, the legal sector has traditionally been more cautious than other industries. However, AI’s potential to transform legal workflows and unlock new levels of productivity is difficult to ignore. In fact, the industry is moving at speed: a recent study shows almost three-quarters (73 percent) of legal practitioners plan to use AI in their legal work within the next year.
On a practical level, AI is evolving so quickly that, across many practices, employees have varying levels of understanding of how AI works, which tasks it is suited to, and the legal implications of using it. At the same time, if firms introduce AI solutions that require deep technical knowledge to use, skills gaps could become increasingly problematic.
Is AI a double-edged sword for lawyers?

The legal industry is not traditionally recognized as one that is quick to embrace change, but recently some professionals have been adopting emerging technology perhaps a little too quickly, leading to all kinds of problems. Generative AI (GenAI) tools have exploded in popularity since OpenAI’s ChatGPT debuted in late 2022, and some lawyers have turned to the technology for everything from legal research to contract drafting.
However, these GenAI models aren’t foolproof. In fact, they’re prone to “hallucinating”: generating information that seems accurate but is actually entirely made up. If lawyers using this technology don’t take the time to double-check its outputs, they run the risk of relying on factually incorrect information, which is embarrassing at best and grounds for legal repercussions at worst.