machine learning – Techdirt

Ctrl-Alt-Speech: Over To EU, Elon

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike is joined by guest host Domonique Rai-Varming, Senior Director, Trust & Safety at Trustpilot. Together they cover:

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund.

Filed Under: artificial intelligence, content moderation, deepfakes, elon musk, machine learning, syria
Companies: amazon, etsy, expedia, trustpilot, twitter, x, youtube

How Refugee Applications Are Being Lost In (Machine) Translation

from the AI-not-I dept

As you may have noticed, headlines are full of the wonders of chatbots and generative AI these days. Although often presented as huge breakthroughs, in many ways they build on machine learning techniques that have been around for years. These older systems have been deployed in real-life situations for some time, which means they provide valuable information about the possible pitfalls of using AI for serious tasks. Here is a typical example of what has been happening in the world of machine translation when applied to refugee applications for asylum, as reported on the Rest of the World site:

A crisis translator specializing in Afghan languages, Mirkhail was working with a Pashto-speaking refugee who had fled Afghanistan. A U.S. court had denied the refugee’s asylum bid because her written application didn’t match the story told in the initial interviews.

In the interviews, the refugee had first maintained that she’d made it through one particular event alone, but the written statement seemed to reference other people with her at the time — a discrepancy large enough for a judge to reject her asylum claim.

After Mirkhail went over the documents, she saw what had gone wrong: An automated translation tool had swapped the “I” pronouns in the woman’s statement to “we.”

That’s a tiny difference, and one that today’s machine translation programs can easily get wrong, especially for languages where training materials are still scarce. And yet the shift from singular “I” to plural “we” can have life-changing consequences – in the case above, it determined whether asylum was granted to a refugee fleeing Afghanistan. There are other problems too:

Based in New York, the Refugee Translation Project works extensively with Afghan refugees, translating police reports, news clippings, and personal testimonies to bolster claims that asylum seekers have a credible fear of persecution. When machine translation is used to draft these documents, cultural blind spots and failures to understand regional colloquialisms can introduce inaccuracies. These errors can compromise claims in the rigorous review so many Afghan refugees experience.
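Errors like the pronoun swap above are, at least in principle, the kind of thing a lightweight consistency check could flag before a statement is filed. Here is a purely illustrative Python sketch of that idea; the function and pronoun lists are assumptions for demonstration, not anything the article says translators or the Refugee Translation Project actually use:

```python
# Hypothetical sketch: flag first-person singular/plural mismatches between
# two English translations of the same account (e.g., an interview transcript
# and a machine-translated written statement). Illustrative only.
import re
from collections import Counter

SINGULAR = {"i", "me", "my", "mine", "myself"}
PLURAL = {"we", "us", "our", "ours", "ourselves"}

def pronoun_profile(text: str) -> Counter:
    """Count first-person singular vs. plural pronouns in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(
        "singular" if w in SINGULAR else "plural"
        for w in words
        if w in SINGULAR or w in PLURAL
    )

def flag_mismatch(interview: str, statement: str) -> bool:
    """Return True if one document reads as 'I' and the other as 'we'."""
    a, b = pronoun_profile(interview), pronoun_profile(statement)
    return (a["singular"] > a["plural"]) != (b["singular"] > b["plural"])

if __name__ == "__main__":
    interview = "I crossed the border alone. I was afraid."
    statement = "We crossed the border. We were afraid."  # machine-translated draft
    print(flag_mismatch(interview, statement))  # True: send for human review
```

A check this crude obviously can’t replace a human translator, but it shows how little it would take to catch the specific class of error described above.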

In the future it is likely that the number of people seeking asylum will increase, not least because of environmental refugees who are fleeing lands made uninhabitable by climate change. Their applications for asylum elsewhere are likely to involve a wider range of lesser-known languages. Turning to machine translation will be a natural move by the authorities, since it takes time and resources to recruit specialist human translators.

The new generation of AI tools, with their high-profile abilities, will encourage this trend, and will also encourage their use to evaluate applications and to recommend whether they should be accepted. The Rest of the World article points out that OpenAI, the company behind ChatGPT, updated its user policies in late March with the following as “Disallowed usage of our models”:

High risk government decision-making, including:

Governments trying to save money will doubtless use them anyway. It will be important for courts and others dealing with asylum claims to bear this in mind when there seem to be serious discrepancies in refugees’ applications. They may be all in the (machine’s) mind.

Follow me @glynmoody on Mastodon.

Filed Under: afghanistan, ai, asylum, chatbots, chatgpt, climate crisis, machine learning, openai, pashto, refugees, translation
Companies: openai

Thousands Of Academics Pledge To Boycott Springer's New Machine Learning Title In Support Of Long-Established Open Access Journal

from the if-it-ain't-broke,-and-it's-free,-don't-fix-it dept

Among Techdirt’s many stories chronicling the (slow) rise of open access publishing, a number have been about dramatic action taken by researchers to protest against traditional publishers and their exploitative business model. For example, in 2012, a boycott of the leading publisher Elsevier was organized to protest against its high journal prices and its support for the now long-forgotten Research Works Act. In 2015, the editors and editorial board of the Elsevier title Lingua resigned in order to start up their own open access journal. Now we have another boycott, this time as a reaction against the launch of the for-profit Nature Machine Intelligence, from the German publishing giant Springer. Thousands of academics in the field have added their names to a statement about the new title expressing their concerns:

the following list of researchers hereby state that they will not submit to, review, or edit for this new journal.

We see no role for closed access or author-fee publication in the future of machine learning research and believe the adoption of this new journal as an outlet of record for the machine learning community would be a retrograde step. In contrast, we would welcome new zero-cost open access journals and conferences in artificial intelligence and machine learning.

The contact person for the statement is Thomas G. Dietterich, Distinguished Professor (Emeritus) and Director of Intelligent Systems at Oregon State University. He has a long history of supporting open access. In 2001, he was one of 40 signatories to another statement. It announced their resignation from the editorial board of the Machine Learning Journal (MLJ), which was not open access, and their support for the Journal of Machine Learning Research (JMLR), launched in 2000, which was open access. As they wrote:

our resignation from the editorial board of MLJ reflects our belief that journals should principally serve the needs of the intellectual community, in particular by providing the immediate and universal access to journal articles that modern technology supports, and doing so at a cost that excludes no one. We are excited about JMLR, which provides this access and does so unconditionally. We feel that JMLR provides an ideal vehicle to support the near-term and long-term evolution of the field of machine learning and to serve as the flagship journal for the field.

That confidence seems to have been justified. JMLR is now up to its 18th volume, and is flourishing. It is “zero cost” open access — it makes no charge either to read or to be published if a paper is accepted by the editors. The last thing this minimalist operation needs is a rival title from a well-funded publisher able to pour money into its new launch in order to attract authors and take over the market. Hence the current boycott of Nature Machine Intelligence, and the call for “new zero-cost open access journals and conferences in artificial intelligence and machine learning” instead.

As to why Springer decided to announce a competitor to a well-established and well-respected journal, an article in The Next Web points out that the German publishing company is about to offer shares worth up to €1.6 billion (around $1.95 billion) in its imminent IPO. A new journal covering the super-hot areas of AI, machine learning and robotics is just the sort of thing to help give the share price a boost. And when there’s serious money to be made, who cares about the collateral damage to a much-loved open access title running on a shoestring?

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

Filed Under: academics, knowledge, machine learning, nature, open access, research
Companies: nature, springer

Washington's Growing AI Anxiety

from the perhaps-AI-can-help-us-deal-with-AI dept

Most people don’t understand the nuances of artificial intelligence (AI), but at some level they comprehend that it’ll be big, transformative and cause disruptions across multiple sectors. And even if AI proliferation won’t lead to a robot uprising, Americans are worried about how AI and automation will affect their livelihoods.

Recognizing this anxiety, our policymakers have increasingly turned their attention to the subject. In the 115th Congress, there have already been more mentions of “artificial intelligence” in proposed legislation and in the Congressional Record than ever before.

While not everyone agrees on how we should approach AI regulation, one approach that has gained considerable interest is augmenting the federal government’s expertise and capacity to tackle the issue. In particular, Sen. Brian Schatz has called for a new commission on AI; and Sen. Maria Cantwell has introduced legislation setting up a new committee within the Department of Commerce to study and report on the policy implications of AI.

This latter bill, the “FUTURE of Artificial Intelligence Act” (S. 2217/H.R. 4625), sets forth a bipartisan proposal that seems to be gaining some traction. While the bill’s sponsors should be commended for taking a moderate approach in the face of growing populist anxiety, it’s not clear that the proposed advisory committee would be particularly effective at all it sets out to do.

One problem with the bill is how it sets the definition of AI as a regulatory subject. For most of us, it’s hard to articulate precisely what we mean when we talk about AI. The term “AI” can describe a sophisticated program like Apple’s Siri, but it can also refer to Microsoft’s Clippy, or pretty much any kind of computer software.

It turns out that AI is a difficult thing to define, even for experts. Some even argue that it’s a meaningless buzzword. While this is a fine debate to have in the academy, prematurely enshrining a definition in a statute, as this bill does, is likely to become the basis for future policy (indeed, another recent bill offers a totally different definition). Down the road, this could lead to confusion and misapplication of AI regulations. This provision also seems unnecessary, since the committee is empowered to change the definition for its own use.

The committee’s stated goals are also overly ambitious. In the course of a year and a half, it would set out to “study and assess” over a dozen different technical issues, from economic investment, to worker displacement, to privacy, to government use and adoption of AI (although, notably, not defense or cyber issues). These are all important issues. However, the expertise required to adequately deal with these subjects is likely beyond the capabilities of the committee’s 19 voting members, only five of whom are academics. While the committee could theoretically choose to focus on a narrower set of topics in its final report, this structure is fundamentally not geared towards producing the sort of deep analysis that would advance the debate.

Instead of trying to address every AI-related policy issue with one entity, a better approach might be to build separate, specialized advisory committees based in different agencies. For instance, the Department of Justice might have a committee on using AI for risk assessment, the General Services Administration might have a committee on using AI to streamline government services and IT infrastructure, and the Department of Labor might have a committee on worker displacement caused by AI and automation or on using AI in employment decisions. While this approach risks some duplicative work, it would also be much more likely to produce deep, focused analysis relevant to specific areas of oversight.

Of course, even the best public advisory committees have limitations, including politicization, resource constraints and compliance with the Federal Advisory Committee Act. However, not all advisory bodies have to be within (or funded by) government. Outside research groups, policy forums and advisory committees exist within the private sector and can operate beyond the limitations of government bureaucracy while still effectively informing policymakers. Particularly for those issues not directly tied to government use of AI, academic centers, philanthropies and other groups could step in to fill this gap without any need for new public expenditures or enabling legislation.

If Sen. Cantwell’s advisory committee-focused proposal lacks robustness, Sen. Schatz’s call for creating a new “independent federal commission” with a mission to “ensure that AI is adopted in the best interests of the public” could go beyond the bounds of political possibility. To his credit, Sen. Schatz identifies real challenges with government use of AI, such as those posed by criminal justice applications, and in coordinating between different agencies. These are real issues that warrant thoughtful solutions. Nonetheless, the creation of a new agency for AI is likely to run into a great deal of pushback from industry groups and the political right (like similar proposals in the past), making it a difficult proposal to move forward.

Beyond creating a new commission or advisory committees, the challenge of federal expertise in AI could also be substantially addressed by reviving Congress’ Office of Technology Assessment (which I discuss in a recent paper with Kevin Kosar). Reviving OTA has a number of advantages: OTA ran effectively for years and still exists in statute, it isn’t a regulatory body, it is structurally bipartisan and it would have the capacity to produce deep-dive analysis in a technology-neutral manner. Indeed, there’s good reason to strengthen the First Branch first, since Congress is ultimately responsible for making the legal frameworks governing AI as well as overseeing government usage.

Lawmakers are right to characterize AI as a big deal. Indeed, there are trillions of dollars in potential economic benefits at stake. While the instincts to build expertise and understanding first make for a commendable approach, policymakers will need to do it the right way, across multiple facets of government, to successfully shape the future of AI without hindering its transformative potential.

Filed Under: ai, artificial intelligence, brian schatz, committees, machine learning, maria cantwell, regulation

Stupid Patent Of The Month: Will Patents Slow Artificial Intelligence?

from the that-would-be-unfortunate dept

We have written many times about why the patent system is a bad fit for software. Too often, the Patent Office reviews applications without ever looking at real world software and hands out broad, vague, or obvious patents on software concepts. These patents fuel patent trolling and waste. As machine learning and artificial intelligence become more commonplace, it is worth considering how these flaws in the patent system might impact advances in AI.

Some have worried about very broad patents being issued in the AI space. For example, Google has a patent on a common machine learning technique called dropout. This means that Google could insist that no one else use this technique until 2032. Meanwhile, Microsoft has a patent application with some very broad claims on active machine learning (the Patent Office recently issued a non-final rejection, though the application remains pending and Microsoft will have the opportunity to argue why it should still be granted a patent). Patents on fundamental machine learning techniques have the potential to fragment development and hold up advances in AI.
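For readers who haven’t run into it, the dropout technique mentioned above is conceptually simple: during training, a random subset of a layer’s activations is zeroed out so the network can’t rely too heavily on any single unit. A minimal NumPy sketch of the general idea (an illustration of the textbook technique, not Google’s patented formulation) looks something like this:

```python
# Illustrative "inverted dropout" applied to a layer's activations.
# This is the textbook idea, not any particular patented implementation.
import numpy as np

def dropout(activations: np.ndarray, keep_prob: float = 0.8,
            training: bool = True) -> np.ndarray:
    if not training or keep_prob >= 1.0:
        return activations  # at inference time, use the full layer
    # Randomly keep each unit with probability keep_prob...
    mask = np.random.rand(*activations.shape) < keep_prob
    # ...and rescale so the expected activation stays the same.
    return activations * mask / keep_prob

layer_output = np.random.randn(4, 8)   # a toy batch of activations
print(dropout(layer_output, keep_prob=0.8))
```

The worry is not that this is hard to implement, it plainly isn’t, but that a patent on something this fundamental lets one company decide who gets to use it.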

As a subset of software development, AI patents are likely to raise many of the same problems as software patents generally. For example, we’ve noted that many software patents take the form: apply well-known technique X in domain Y. Our Stupid Patent of the Month from January 2015 applied the years-old practice of remotely updating software to sports video games (the patent was later found invalid). Other patents have computers do incredibly simple things like counting votes or counting calories. We can expect the Patent Office to hand out similar patents on using machine learning techniques in obvious and expected ways.

Indeed, this has already happened. Take U.S. Patent No. 5,944,839, for a “system and method for automatically maintaining a computer system.” This patent includes very broad claims applying AI to diagnosing problems with computer systems. Claim 6 of this patent states:

A method of optimizing a computer system, the method comprising the steps of:

detecting a problem in the computer system;

activating an AI engine in response to the problem detection;

utilizing, by the AI engine, selected ones of a plurality of sensors to gather information about the computer system;

determining, by the AI engine, a likely solution to the problem from the gathered information; and

when a likely solution cannot be determined, saving a state of the computer system.

Other than the final step of saving the state of the computer where a solution cannot be found, this claim essentially covers using AI to diagnose computer problems. (The claim survived a challenge before the Patent Trial and Appeal Board, but the Federal Circuit recently ordered [PDF] that the Board reconsider whether prior art, in combination, rendered the claim obvious.)
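To see how little the claim language actually narrows things, consider how easily a trivial script maps onto the claimed steps. The following is a deliberately simplistic, hypothetical sketch (the thresholds and the “AI engine” rules are invented for illustration, and no real diagnostic product works this crudely, which is rather the point):

```python
# A deliberately trivial, hypothetical walk through the claimed steps:
# detect a problem, "activate an AI engine," gather sensor information,
# determine a likely solution, and save state otherwise. Everything here
# is invented for illustration.
import json
import shutil
import time
from typing import Optional

def detect_problem() -> bool:
    # Step 1: "detecting a problem in the computer system"
    usage = shutil.disk_usage("/")
    return usage.free / usage.total < 0.10  # less than 10% of the disk free

def gather_sensors() -> dict:
    # Step 3: "utilizing ... a plurality of sensors to gather information"
    usage = shutil.disk_usage("/")
    return {"free_ratio": usage.free / usage.total, "timestamp": time.time()}

def ai_engine(sensors: dict) -> Optional[str]:
    # Steps 2 and 4: the "AI engine" consults the sensors and picks a fix
    if sensors["free_ratio"] < 0.05:
        return "delete temporary files"
    if sensors["free_ratio"] < 0.10:
        return "warn the user to free disk space"
    return None

if __name__ == "__main__":
    if detect_problem():
        sensors = gather_sensors()
        solution = ai_engine(sensors)
        if solution is None:
            # Step 5: "when a likely solution cannot be determined, saving a state"
            with open("system_state.json", "w") as fh:
                json.dump(sensors, fh)
        else:
            print("Likely solution:", solution)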

A more recent patent raises similar concerns. U.S. Patent No. 9,760,834 (the ‘834 patent), owned by Hampton Creek, Inc., relates to using machine learning techniques to create models that can be used to analyze proteins. This patent is quite long, and its claims are also quite long (which makes it easier to avoid infringement because every claim limitation has to be met for there to be infringement). But the patent still reflects a worrying trend. In essence, Claim 1 of the patent amounts to ‘do machine learning on this particular type of application.’ Indeed, during the prosecution of the patent application, Hampton Creek argued [PDF] that prior art could be distinguished because it merely described applying machine learning to “assay data” rather than explicitly applying the techniques to protein fragments.

More specifically, the patent follows Claim 1 with a variety of subsequent claims that amount to “when you’re doing that machine learning from Claim 1, use this particular well-known, pre-existing machine learning algorithm.” Indeed, in our opinion the patent reads like the table of contents of an intro to AI textbook. It covers using just about every standard machine learning technique you’d expect to learn in an intro to AI class (including linear and nonlinear regression, k-nearest neighbor, clustering, support vector machines, principal component analysis, feature selection using lasso or elastic net, Gaussian processes, and even decision trees) but applied to the specific example of proteins and data you can measure about them. Certainly, applying these techniques to proteins may be a worthwhile and time-consuming enterprise. But that does not mean it deserves a patent. A company should not get a multi-year monopoly on using well-known techniques in a particular domain where there was no reason to think the techniques couldn’t be used in that domain (even if they were the first to apply the techniques there). A patent like this doesn’t really bring any new technology to the table; it simply limits the areas in which an existing tool can be used. For this reason, we are declaring the ‘834 patent our latest Stupid Patent of the Month.
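To make the “table of contents of an intro to AI textbook” point concrete: applying any one of those standard techniques to a new kind of tabular data is, in practice, a few lines of off-the-shelf code. Here is a purely hypothetical scikit-learn sketch with made-up feature names and synthetic data; it is not the patent’s method, just a reminder of how routine this kind of application is:

```python
# Hypothetical: fit a couple of bog-standard models to a synthetic table of
# "protein fragment" features. The data and feature names are invented; the
# techniques (SVMs, lasso regression) are decades-old textbook material.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                    # e.g., hydrophobicity, charge, mass...
y_class = (X[:, 0] + X[:, 1] > 0).astype(int)    # a made-up binary property
y_reg = X @ rng.normal(size=6) + rng.normal(scale=0.1, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y_class, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)    # support vector machine
print("SVM accuracy:", clf.score(X_test, y_test))

reg = Lasso(alpha=0.01).fit(X, y_reg)            # feature selection via lasso
print("Lasso coefficients:", reg.coef_)
```

The hard, valuable work in a project like this lives in collecting and understanding the data, not in the handful of library calls, and the library calls are what the claims recite.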

In fairness, the ‘834 patent is not as egregious as some of the other patents we have selected for this dubious “honor.” But we still think the patent is worth highlighting in this series because the problems similar patents could create for innovation and economic progress might be much more serious. Handing out patents on using well-known machine learning techniques but limited to a particular field merely encourages an arms race where everyone, even companies doing routine development, attempts to patent their work. The end result is a minefield of low-quality machine learning patents, each applying the entire field of machine learning to a niche sub-problem. Such an environment will fuel patent trolling and hurt startups that want to use machine learning as a small part of the larger novel technologies they want to bring to market.

We recently launched a major project monitoring advances in artificial intelligence and machine learning. As we pursue this project, we’ll also monitor patenting in AI and try to gauge its impact on progress.

Reposted from EFF’s Stupid Patent of the Month series.

Filed Under: ai, deep learning, machine learning, patent thicket, patents

Daily Deal: Machine Learning with Python Course and E-Book Bundle

from the good-deals-on-cool-stuff dept

Machine learning is a rapidly growing area of study and is becoming a large part of our everyday lives. The $49 Machine Learning with Python Course and E-Book Bundle takes you on a deep dive into machine learning with 4 e-books and 5 online courses. You’ll learn about TensorFlow, data mining, Python and much more.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Filed Under: daily deal, machine learning

Daily Deal: The Complete Machine Learning Bundle

from the good-deals-on-cool-stuff dept

Dive into the world of self-driving cars, speech recognition technology and more with the $39 Complete Machine Learning Bundle. Over 10 courses, you will learn about pattern recognition and prediction and how to harness the power of machine learning to take your programming to the next level. Discover quant trading, how to use Hadoop and MapReduce to tackle large data sets, how to create a sentiment analyzer with Twitter and Python, and much more.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Filed Under: daily deal, machine learning

from the promote-the-progress dept

As the march of progress of robotics and artificial intelligence continues on, it seems that questions of the effects of this progress will only increase in number and intensity. Some of these questions are very good. What effect will AI have on employment? What safeguards should be put in place to neuter AI and robotics and keep humankind the masters in this relationship? These are questions soon to break through the topsoil of science fiction and into the sunlight of reality and we should all be prepared with answers to them.

Other questions are less useful and, honestly, far easier to answer. One that continues to pop up every now and again is whether machines and AI that manage some simulacrum of creativity should be afforded copyright rights. It’s a question we’ve answered before, but which keeps being asked aloud with far too much sincerity.

This isn’t just an academic question. AI is already being used to generate works in music, journalism and gaming, and these works could in theory be deemed free of copyright because they are not created by a human author. This would mean they could be freely used and reused by anyone and that would be bad news for the companies selling them. Imagine you invest millions in a system that generates music for video games, only to find that music isn’t protected by law and can be used without payment by anyone in the world.

Unlike with earlier computer-generated works of art, machine learning software generates truly creative works without human input or intervention. AI is not just a tool. While humans program the algorithms, the decision making – the creative spark – comes almost entirely from the machine.

Let’s get the easy part out of the way: the culminating sentence in the quote above is not true. The creative spark is not the artistic output. Rather, the creative spark has always been known as the need to create in the first place. This isn’t a trivial quibble, either, as it factors into the simple but important reasoning for why AI and machines should certainly not receive copyright rights on their output.

That reasoning is the purpose of copyright law itself. Far too many see copyright as a reward system for those who create art rather than what it actually was meant to be: a boon granted to an artist as an incentive for that artist to create more art for the benefit of the public as a whole. Artificial intelligence, however far progressed, desires only what it is programmed to desire. In whatever hierarchy of needs an AI might have, profit via copyright would factor either laughably low or not at all into its future actions. Future actions of the artist, conversely, are the only item on the agenda for copyright’s purpose. If receiving a copyright wouldn’t spur AI to create more art beneficial to the public, then copyright ought not to be granted.

To be fair to the Phys.org link above, it ultimately reaches the same conclusion.

The most sensible move seems to follow those countries that grant copyright to the person who made the AI’s operation possible, with the UK’s model looking like the most efficient. This will ensure companies keep investing in the technology, safe in the knowledge they will reap the benefits. What happens when we start seriously debating whether computers should be given the status and rights of people is a whole other story.

Except for two things. First, seriously debating the rights of computers compared with people is exactly what the post is doing by giving oxygen to the question of whether computers ought to get one of those rights in the form of copyright. Second, the UK’s method isn’t without flaw, either. Again, we’re talking about the purpose being the ultimate benefit to the public in the form of more artistic output, but the UK’s way of doing things divorces artistic creation from copyright. Instead, it awards copyright to the creator of the creator, which might spur more output of more AI creators, but how diverse of an artistic output is the public going to receive from an army of AI? We might be able to have a legitimate argument here, but there is a far simpler solution.

Machines don’t get copyright, nor do their creators. Art made by enslaved AI is art to be enjoyed by all.

Filed Under: ai, copyright, incentives, machine learning

Activists Cheer On EU's 'Right To An Explanation' For Algorithmic Decisions, But How Will It Work When There's Nothing To Explain?

from the not-so-easy dept

I saw a lot of excitement and happiness a week or so ago around some reports that the EU’s new General Data Protection Regulation (GDPR) might include a “right to an explanation” for algorithmic decisions. It’s not clear if this is absolutely true, but it’s based on a reading of the agreed-upon text of the GDPR, which is scheduled to go into effect in two years.

Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which “significantly affect” users. The law will also create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them.

Lots of people on Twitter seemed to be cheering this on. And, indeed, at first glance it sounds like a decent idea. As we’ve just discussed recently, there has been a growing awareness of the power and faith placed in algorithms to make important decisions, and sometimes those algorithms are dangerously biased in ways that can have real consequences. Given that, it seems like a good idea to have a right to find out the details of why an algorithm decided the way it did.

But it also could get rather tricky and problematic. One of the promises of machine learning and artificial intelligence these days is the fact that we no longer fully understand why algorithms are deciding things the way they do. While it applies to lots of different areas of AI and machine learning, you can see it in the way that AlphaGo beat Lee Sedol in Go earlier this year. It made decisions that seemed to make no sense at all, but worked out in the end. The more machine learning “learns” the less possible it is for people to directly understand why it’s making those decisions. And while that may be scary to some, it’s also how the technology advances.

So, yes, there are lots of concerns about algorithmic decision making — especially when it can have a huge impact on people’s lives, but a strict “right to an explanation” seems like it may actually create limits on machine learning and AI in Europe — potentially hamstringing projects by requiring them to be limited to levels of human understanding. The full paper on this does more or less admit this possibility, but suggests that it’s okay in the long run, because the transparency aspect will be more important.

There is of course a tradeoff between the representational capacity of a model and its interpretability, ranging from linear models (which can only represent simple relationships but are easy to interpret) to nonparametric methods like support vector machines and Gaussian processes (which can represent a rich class of functions but are hard to interpret). Ensemble methods like random forests pose a particular challenge, as predictions result from an aggregation or averaging procedure. Neural networks, especially with the rise of deep learning, pose perhaps the biggest challenge: what hope is there of explaining the weights learned in a multilayer neural net with a complex architecture?
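To make the tradeoff the paper describes concrete: in a linear model each learned coefficient maps directly onto an input feature, while in even a small neural network the learned parameters are matrices of weights with no individually meaningful story to tell. A minimal, illustrative scikit-learn sketch (synthetic data, not tied to any real GDPR use case):

```python
# Illustrative contrast: interpretable coefficients vs. opaque weight matrices.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # three input features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

linear = LinearRegression().fit(X, y)
# One number per feature: "feature 0 adds ~2.0 to the prediction per unit."
print("linear coefficients:", linear.coef_)

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)
# Thousands of weights spread across layers, with no per-feature explanation.
print("MLP weight matrix shapes:", [w.shape for w in mlp.coefs_])
```

For the linear model, an “explanation” falls straight out of the fitted object; for the neural network, any explanation has to be reconstructed after the fact, which is exactly the difficulty the paper is pointing at.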

In the end though, the authors think these challenges can be overcome.

While the GDPR presents a number of problems for current applications in machine learning they are, we believe, good problems to have. The challenges described in this paper emphasize the importance of work that ensures that algorithms are not merely efficient, but transparent and fair.

I do think greater transparency is good, but I worry about rules that might hold back useful innovations as well. Prescribing exactly how machine learning and AI needs to work too early in the process may be a problem as well. I don’t think there are necessarily easy answers here — in fact, this is definitely a thorny problem — so it will be interesting to see how this plays out in practice once the GDPR goes into effect.

Filed Under: ai, algorithms, eu, gdpr, machine learning, right to an explanation

Chatbot Helps Drivers Appeal Over $4 Million In Bogus Parking Tickets

from the the-revolution-will-be-automated dept

In what is likely a sign of the coming government-rent-seeking apocalypse, a 19-year-old Stanford student from the UK has created a bot that assists users in challenging parking tickets. The tickets that inevitably result from parking nearly anywhere can now be handled with something other than a) meekly paying the fine or b) throwing them away until a bench warrant is issued.

While a variety of bots have been created to handle a variety of tasks, very few have handled them quite as well as Joshua Browder’s “robot lawyer” — which is certain to draw some attention from disgruntled government agencies who are seeing this revenue stream drying up.

In the 21 months since the free service was launched in London and now New York, Browder says DoNotPay has taken on 250,000 cases and won 160,000, giving it a success rate of 64% appealing over $4m of parking tickets.

Fighting parking tickets is a good place to start, considering most people aren’t looking to retain representation when faced with questionable tickets. The route to a successful challenge isn’t always straightforward, so it’s obviously beneficial to have some guidance in this area — especially guidance that can determine from a set of pre-generated questions where the flaw in the issued ticket might lie.
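Mechanically, that kind of guidance doesn’t require anything exotic. A hypothetical sketch of the core idea (the questions and appeal grounds below are invented for illustration; DoNotPay’s actual logic isn’t public in this form) is essentially a mapping from scripted questions to grounds of appeal:

```python
# Hypothetical rule-based flow: scripted yes/no questions mapped to possible
# grounds for appealing a parking ticket. Questions and grounds are invented.
QUESTIONS = [
    ("Were the parking signs missing, obscured, or contradictory?",
     "Inadequate signage at the location"),
    ("Was the ticket issued while you were loading or unloading?",
     "Permitted loading/unloading activity"),
    ("Does the ticket list the wrong street, date, or registration plate?",
     "Factual errors on the ticket itself"),
    ("Had the meter or payment machine malfunctioned?",
     "Payment equipment failure"),
]

def suggest_grounds(answers):
    """Return the appeal grounds corresponding to 'yes' answers."""
    return [ground for (question, ground), yes in zip(QUESTIONS, answers) if yes]

if __name__ == "__main__":
    answers = []
    for question, _ in QUESTIONS:
        reply = input(question + " (y/n) ")
        answers.append(reply.strip().lower().startswith("y"))
    grounds = suggest_grounds(answers)
    for ground in grounds or ["No obvious ground found; consider paying or seeking advice."]:
        print("-", ground)
```

The value is less in the code than in encoding the right questions, which is where the legal knowledge actually lives.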

Anyone looking for an expansion of chatbots into trickier areas of criminal law is probably going to need to rein in their enthusiasm. There’s not much at stake individually in challenging a traffic ticket. The 36% who haven’t seen a successful appeal are no worse off than they were in the first place. But using the bot is still better than simply assuming that paying the fine is the only option, especially when the ticket appears to be bogus.

Browder has plans for similar bot-based legal guidance in the future.

Browder’s next challenge for the AI lawyer is helping people with flight delay compensation, as well as helping the HIV positive understand their rights and acting as a guide for refugees navigating foreign legal systems.

The fight against airlines should prove interesting. Generally speaking, most airlines aren’t willing to exchange their money for people’s time, especially when the situation creating the delay is out of their hands — which seems to be every situation, whether it’s a snowstorm or a passenger confusing math with terrorism. But, if it proves as successful as Browder’s first AI assistant, more grumbling from those whose business model has just been interfered with is on the way.

Filed Under: ai, chatbot, joshua browder, machine learning, parking tickets, robot lawyer