technology – Techdirt
Instead Of Fearing & Banning Tech, Why Aren’t We Teaching Kids How To Use It Properly?
from the luddism-isn't-helping dept
I recognize why some parents are worried about screen time and the use of technology in the classroom. But isn’t the better idea to teach kids how to use it properly, rather than banning it altogether?
Lately, there have been a bunch of stories about banning mobile phones in schools. Both California and New York have been pushing to make it mandatory. Lots of people are ignoring that (1) this has been tried and failed before, including in New York, and (2) researchers studying places where bans have been implemented have found they aren’t very effective.
This is not to say that phones belong in schools. There are plenty of reasons why schools or teachers might decide that phones need to be out of kids’ hands during class time. But blanket mandatory bans just seem like overkill and prone to problematic enforcement.
Even worse, the push to ban phones is already morphing into other kinds of technological bans. The Wall Street Journal is reporting on new efforts to ban other kinds of technology, including Chromebooks or other kinds of laptops.
Cellphone bans are taking effect in big districts across the country, including Los Angeles and Las Vegas. The next logical question, at least for some, is: What about the other screens? These concerned parents argue that the Covid-era shift that put Chromebooks and tablets in more students’ hands is fueling distraction more than learning.
I know that my kids use Chromebooks as part of their schooling, and they are pretty useful tools. Yes, there is a concern about kids spending too much time staring at screens, but the idea of banning these devices entirely in schools seems backwards.
Once kids graduate, they’re going to need to use computers or other devices in a very large number of jobs out there. We’re doing our kids an incredible disservice in thinking that the way to train them for the modern world is to ban the tools of the modern world from their instruction.
No one is saying to just let them go crazy on these devices, but, at the very least, it’s important to train them in the proper use of these modern technologies, which includes how and when to put them down and do something else.
Otherwise, we’re guaranteeing that kids will graduate into jobs that require computers and other devices they simply haven’t been trained to use properly. This means that all the things parents now seem afraid of will instead happen on the job.
That doesn’t seem smart.
All of this is beginning to feel quite like the freak-out parents had about calculators four decades ago. Teaching kids how to use modern technology well should be a job for schools and educators. It seems like a real disservice to kids and their future for politicians and parents to step in and try to stick everyone’s head in the sand and make sure that no kids are prepared for the modern world.
The article is full of parents opting their kids out of any technology, which seems unlikely to be healthy for those kids either. It notes that schools are struggling with parents demanding that all technology be taken out of class, even though plenty of teaching tools today involve technology.
As it should.
Avoiding technology entirely for kids until they graduate seems like a recipe for disaster. They’re going to be dropped into a world where technology is a necessity, and they will not have any sense of how to use it, let alone use it properly.
There’s a way for schools to teach kids how to properly use technology, and it isn’t by telling them it is bad and must be banned.
Filed Under: chromebooks, education, mobile phone bans, mobile phones, schools, technology
NY’s ‘Right To Repair’ Law Was Neutered By Lobbyists And Governor Hochul After Passage. Now, Some Lawmakers Are Trying To Fix It.
from the fixing-your-supposed-fix dept
Wed, Feb 7th 2024 03:20pm - Karl Bode
In late 2022, the state of New York finally passed new right to repair legislation after years of activist pressure. The bill, which went live last month, gives New York consumers the right to fix their electronic devices themselves or have them more easily repaired by an independent repair shop, instead of being forced to only obtain repairs through costly manufacturer repair programs.
The problem: after it was passed, lobbyists convinced New York Governor Kathy Hochul to water the bill down almost to the point of uselessness. As a result, the bill doesn’t actually cover many of the sectors where annoying repair monopolization efforts are the worst, including cars, medical devices, agricultural hardware, E-bikes, home alarm systems, or power tools.
The law also only covers tech products sold in New York on or after July 1, 2023. Additional restrictions, added by industry and Hochul at the last second, force consumers to buy entire “repair assemblages” instead of individual parts. Hochul didn’t really bother to give a useful explanation as to why she lobotomized the law in such a fashion, but the action generally speaks for itself.
But New York Assemblymember Patricia Fahy has introduced Assembly Bill 8955, which aims to restore the bill to look something like the version that voters actually approved:
“The bill that was passed by the legislature bears very little resemblance to the statute as signed – leaving many loopholes that have now been addressed in A8955,” she said.
A8955 would backdate the product coverage start date to July 1, 2021, expand the definition of an OEM, eliminate the exclusion of repair assemblages and “parts pairing,” and eliminate some of the product exclusions (you can see all modified changes here). Fahy seems to think the updated version of the bill has a good chance of passing given the popularity of right to repair reforms. We’ll see.
While companies like Apple have nabbed headlines for “doing a 180 on right to repair,” that often hasn’t actually been the case. Companies like Apple remain very active, via proxy policy orgs like TechNet, in whittling down or rewording reforms so they’re sometimes effectively useless.
Despite this, right to repair reform continues to see widespread, bipartisan public support. All told, Massachusetts, Colorado, New York, Minnesota, Maine and California have all passed some flavor of right to repair legislation, and the momentum shows no sign of slowing down, even if industry has had some notable success ensuring these laws aren’t quite living up to their full potential.
Filed Under: consumer rights, governor kathy hochul, new york, oem, parts pairing, patricia fahy, repair, right to repair, technology
More Details On How Tech Lobbyists Lobotomized NY’s Right To Repair Law With Governor Kathy Hochul’s Help
from the this-is-why-we-can't-have-nice-things dept
Fri, Feb 17th 2023 05:27am - Karl Bode
The good news: last December New York State finally passed a landmark “right to repair” bill providing American consumers some additional protection from repair monopolies. The bad news: before the bill was passed, corporate lobbyists worked with New York State Governor Kathy Hochul to covertly water the bill down almost to the point of meaninglessness.
Grist received documentation showing how Hochul specifically watered the bill down before passage to please technology giants after a wave of last-minute lobbying:
Draft versions of the bill, letters, and email correspondences shared with Grist by the repair advocacy organization Repair.org reveal that many of the changes Hochul made to the Digital Fair Repair Act are identical to those proposed by TechNet, a trade association that includes Apple, Google, Samsung, and HP among its members. Jake Egloff, the legislative director for Democratic New York state assembly member and bill sponsor Patricia Fahy, confirmed the authenticity of the emails and bill drafts shared with Grist.
The changes all directly reflect requests made by Apple, Google, Microsoft, IBM and other companies desperate to thwart the right to repair movement from culminating in genuinely beneficial legislation. All of these industry giants are keen on monopolizing repair to drive up revenues, but like to hide those motivations (and the resulting environmental harms) behind flimsy claims of consumer privacy and security.
Among their asks: numerous cumbersome intellectual property protections, as well as the elimination of a requirement that manufacturers provide device owners and independent repair providers with “documentation, tools, and parts” needed to access and reset digital locks that impede the diagnosis, maintenance or repair of covered electronic devices.
Additional restrictions added by industry and Hochul at the last second force consumers to buy entire “repair assemblages” instead of just the individual parts they need, which advocates say further undermines the law (imagine being forced to buy an entire computer motherboard when just a single component is broken).
The bill already failed to include vehicles, home appliances, farm equipment or medical devices — all sectors rife with obnoxious attempts to monopolize repair via DRM or by making diagnostics either expensive or impossible. Between that and these last-minute changes, the bill is more ceremonial than productive, and yet another clear example of how normalized U.S. corruption cripples meaningful reform.
Filed Under: hardware, kathy hochul, legislation, new york, repair monopolies, right to repair, technology
Companies: technet
John Deere Once Again Pinky Swears It Will Stop Monopolizing Repair
from the fool-me-once dept
Tue, Jan 10th 2023 05:37am - Karl Bode
Once just the concern of pissed off farmers and nerdy tinkerers, the last two years have seen a groundswell of broader cultural awareness about “right to repair,” and the perils of letting companies like Apple, John Deere, Microsoft, or Sony monopolize repair options, making repairing things you own both more difficult and way more expensive.
John Deere’s draconian repair restrictions on agricultural equipment (and the steady consolidation and reduction in repair options) result in customers having to pay an arm and a leg for service, or drive hundreds of additional, costly miles to get their tractors repaired.
Like Apple and other bigger companies attempting to monopolize repair, John Deere keeps promising that things will soon be different. Like last week, when Deere struck a “memorandum of understanding” with the American Farm Bureau Federation promising that the company will make sure farmers have the right to repair their own farm equipment or go to an independent technician:
Dave Gilmore, Deere’s vice president of ag and turf marketing, said the company looks forward to working with the farm group and “our customers in the months and years ahead to ensure farmers continue to have the tools and resources to diagnose, maintain and repair their equipment.”
There are a few problems. One, this memorandum of understanding isn’t really binding. It’s part of a self-regulatory system the agricultural industry has constructed to pre-empt actual regulation and accountability. Farm Bureau officials will meet occasionally with Deere to try and work out solutions to “right to repair” issues, but there’s no meaningful enforcement or accountability mechanism here.
The MOU also does something I’d wager was a major reason for the agreement; it requires that the American Farm Bureau Federation avoid supporting any looming right to repair legislation:
AFBF agrees to encourage state Farm Bureau organizations to recognize the commitments made in this MOU and refrain from introducing, promoting, or supporting federal or state “Right to Repair” legislation that imposes obligations beyond the commitments in this MOU.
Companies that have constructed lucrative but harmful repair monopolies desperately want to thwart the growing push for right to repair legislation. And they’ve had significant success in not only killing many such laws before they can be passed, but watering down any bills that do manage to survive as we just saw in New York State.
The other problem is that Deere has made similar promises before.
In late 2018, John Deere and a coalition of other agricultural hardware vendors promised (in a “statement of principles”) that by January 1, 2021, Deere and other companies would make repair tools, software, and diagnostics readily available to the masses. In short, they managed to stall right to repair laws in several states in exchange for doing the right thing.
That didn’t happen. And there’s no reason to think it will start happening now. What John Deere (like Apple and every other company monopolizing repair) wants is to do just enough to convince federal lawmakers to back off of new laws and any enforcement with actual teeth. That strategy fairly consistently doesn’t result in reform; it results in theatrics.
Filed Under: hardware, independent repair, right to repair, technology, tractors
Companies: american farm bureau, john deere
New York Becomes The First State To Pass A ‘Right To Repair’ Law
from the pass-go,-collect-your-$200 dept
Tue, Jun 7th 2022 06:19am - Karl Bode
New York State has become the first state in the country to pass “right to repair” legislation taking direct aim at repair monopolies. The bill itself mandates that hardware manufacturers make diagnostic and repair information available to consumers and independent repair shops at “fair and reasonable terms.”
The bill notably doesn’t include vehicles, home appliances, farm equipment or medical devices — all sectors rife with obnoxious attempts to monopolize repair via DRM or by making diagnostics either expensive or impossible. But right to repair advocates like iFixit’s Kyle Wiens say they are hopeful they can include such technologies in additional NY state bills down the road:
“There will still be a long way to go before we’ve legally secured a Right to Repair for every thing, across the whole world. Many other states are considering bills of their own, and we’ve still got appliances, tractors, and medical devices on our dream docket.
Nevertheless, this victory is the biggest the Right to Repair movement has seen so far.”
For the last year or two we were in a race to see which state would pass right to repair legislation first. California seemed poised to take this mantle last week, but lobbyists scuttled the effort at the last minute, falsely claiming that the bill would harm consumer privacy and security. A federal bill has been considered, but, there too, cross-sector lobbying has stalled progress.
There’s a long list of large corporations that are working overtime to cement repair monopolies either through DRM or draconian restrictions on access to tools, diagnostic systems, parts, or device manuals. This ham-fisted behavior, most notably by companies like Apple or John Deere, has resulted in a massive, bipartisan, grassroots movement that has slowly jumped from the nerdy fringe to the mainstream.
While a scattered number of companies have made improvements to try and pre-empt such legislation, many others have only doubled down on the behavior. Worse, they’ve taken to bandying about all manner of false claims about how dismantling their repair monopolies would harm consumer privacy and safety, help sexual predators, or turn states into diabolical meccas for hackers.
But those efforts have proven to be a hard sell, and New York is likely the first of several states responding with common sense and extremely popular legislation. It’s been a grim stretch for consumer rights in the U.S. (especially in telecom), though watching the right to repair movement galvanize and slowly enact meaningful change has proven to be a refreshing exception.
Filed Under: diagnostics, drm, hardware, medical, monopolies, new york, repair, right to repair, technology
Policy Building Blocks, And How We Talk About The Law
from the start-here dept
One of the fundamental difficulties in doing policy advocacy, including, and perhaps especially, tech policy advocacy, is that we are not only speaking of technology, which can often seem inscrutable and scary to non-experts, but of law, which is itself an intricate and often opaque system. This complicated nature of our legal system can present challenges, because policy involves an application of law to technology, and we can’t apply it well when we don’t understand how the law works. (It’s also hard to do well when we don’t understand how the technology works, but this post is about the law part, so we’ll leave the issues with understanding technology aside for now.)
Even among lawyers, who should have some expertise in understanding the law, people can find themselves at different points along the learning curve in terms of understanding the intricacies and basic mechanics of our legal system. As explained before, law is often so complex that, even as practitioners, lawyers tend to become very specialized and may lose touch with some basic concepts if they do not often encounter them in the course of their careers.
Meanwhile it shouldn’t just be lawyers who understand law anyway. Certainly policymakers, charged with making the law, should have a solid understanding of what they are working with. But regular people should too. After all, the point of a democracy is that the people get to decide what their laws should be (or at least be able to charge their representatives to make good ones on their behalf). And people can’t make good choices when they don’t understand how the choices they make fit into the system they are being made for.
Remember that none of these choices are being made in a vacuum; we do not find ourselves today with a completely blank canvas. Instead, we’ve all inherited a legal system that has chugged along for two centuries. We can, of course, choose to change any of it should we so require, but such an exercise would be best served by having a solid grasp on just what it is that we would be changing. Only with that insight can we be sure that any changes we might make would be needed, appropriate, and not themselves likely to cause even more problems than whatever we were trying to fix.
Because while our legal system is sometimes clunky and cumbersome, full of paradoxes, sometimes irreconcilable tensions, and lots of interdependencies, and is sometimes built upon perhaps naïve assumptions about what would best serve the nation… at the same time it’s really not a bad system and overall it has served us well. Even some of its more clunky bits tend to exist for good reasons, which have not necessarily been obviated by our changing nation and world. By and large, American legal constructs still continue to function as strong foundational pillars upon which to predicate a nation dedicated to the rule of law, liberty, and justice for all. So before we pull any parts of our legal system down, or think we need to reinvent the entire legal wheel, we should instead first understand the tools our system has already given us, because we may be able to solve a lot of the problems we might wish to solve just by using them better.
Thus we are left with a situation where effective advocacy depends on effective education, even more perhaps than effective argument. We can scream and yell at each other all day about what sorts of policy we might prefer, and that’s fine, but it can only be productive so long as we are all starting on the same page, with solid grasp of where the law is now and what we have to work with. Otherwise we will just be wasting time, because there’s no point arguing in support of a policy predicated on misapprehensions about how our legal world works: no matter how well-intentioned your proposed policy, you will still never be able to create the better world you were hoping to create with it because it simply won’t be something reality can accommodate.
For policy discourse to be able to produce that better world, we need to be equipped to feed the discourse developing it more effectively. So towards that end, look out here for pieces that don’t just talk about a specific legal case or piece of legislation, but instead take the time to focus on some of the broader legal concepts and constructs that any analysis of such cases or legislation should necessarily involve. Some pieces may be quite broad, to show general concepts or map out the landscape of a particular area of law, while some may zoom in on specific legal concepts and doctrines, particularly if a policy issue has arisen that pushes a certain topic to the fore. And while they will probably be written in whatever order the mood or opportunity strikes, and at whatever pace is practical, the goal is to ultimately paint as full a constellation of policy building blocks as possible. These are the puzzle pieces we put together to form any wider policy, and it is important for everyone trying to form that policy to be able to use that vocabulary accurately and well so that, rather than demanding the impossible, we can instead all collectively endeavor to make policy that can actually serve us well.
Filed Under: law, policy, policy building blocks, technology, technology policy
Digital Democracies: How Liberal Governments Can Adapt In The Technological Age
from the embrace-the-technology dept
At the turn of the last millennium, there was a wave of optimism surrounding new technologies and the empowerment of the modern digital citizen. A decade later, protestors across North Africa and the Middle East leveraged platforms such as Facebook and Twitter to bring down authoritarian regimes during the revolutions of the Arab Spring, and it was believed these technologies would bring about a new flourishing of the worldwide liberal democratic order.
Unfortunately, the emancipatory potential of the open internet has been undermined by the latest in a long line of authoritarian regimes hijacking the technology. Autocracies adapted, with China leading the way. Towards the end of the 20th century, the CCP introduced a new political-economic model revolving around centralized rule and a controlled market economy. With this new model, China has successfully broken common political-economic orthodoxy by limiting domestic desire for democracy while maintaining a sizable middle class. A key driver of this is a “comprehensive system of state repression, bolstered by the latest digital technologies.”
China has applied key technologies such as artificial intelligence, facial recognition, and data analytics to usher in a new age of digital dictatorship that can spy on its citizens, predict dissent, and censor unwanted information, increasing regime resiliency through a virtually manufactured world safe for autocracy. Many leaders of the developing world have taken advantage of this, as China has exported its model to other countries, such as Venezuela, to restrict the freedoms of the modern citizen.
As authoritarian governments increase regime resiliency by leveraging digital technologies, citizens of liberal democracies have trouble trusting these same technologies, in no small part due to their exploitation by authoritarians and their agents at home and abroad. This understandable hesitancy may not only lead to a technology gap, but leave democratic institutions vulnerable to threats from these same technologies; one can simply look to Russia’s meddling attempts in recent U.S. elections. The pendulum has swung towards autocracy as digital technologies seem to have had an asymmetrical effect, bolstering authoritarian regimes as the tides turn against the previous post-Cold War wave of liberalism. Pessimists are looking backwards, opining that perhaps digital technologies are in tension with, or even pose a threat to, liberal democratic governments. This does not need to be the case.
As pointed out by Mike Masnick, when charting the future of the digital landscape, one should not automatically assume “progress towards a ‘good’ outcome is inevitable and easy,” but nor is the path towards a technological dystopia. Not all progress is good, but if we can’t move forward and can’t stay still, that leaves only one option. By rethinking how we apply these technologies, democracies around the world can find methods by which to both enhance the liberal democratic ideological values, and protect against weaknesses inherent to the political ideology, creating resilient “digital democracies” to stand against the rising autocratic tide and reinforce their own struggling institutions.
Transparency through Technology
The main method by which democracies can enhance liberal principles is by creating more transparent governments. Transparency can act to limit the powers of government, promoting freedom and individualism, while making policy and officials more effective and accountable, respectively. An informed, educated citizenry is the cornerstone of any robust liberal democracy, and there’s no reason the technologies of the Third Industrial Revolution shouldn’t be employed in service of this goal. To do this, digital technologies should be leveraged to better illuminate the wants and needs of citizens for more informed policy making, create reliable metrics so constituents can directly measure the effects of politicians’ policy decisions, and generally increase the transparency of actions of government officials, law enforcement agencies, and lobbyists.
Though digital technologies make it possible for countless data points to be gathered on individual citizens, the potential for these processes goes beyond delivering targeted ads, and need not end in a CCP-style social credit score. Data analysis makes it possible to create policy options that better fit citizens’ needs, such as through customizability. If insurance companies allow customers to select policy options based on their needs, why can national governments not do the same based on a person’s location, age, family size, economic situation, etc.? We already see this in tax policy, but with digital technologies, this can be applied more broadly. For example, the U.S. welfare state is notoriously difficult to navigate and runs on systems almost old enough to qualify for Social Security. An embrace of new developments could not only make programs more accessible to those who qualify, but more tailored to their specific needs. Policymaking can be reimagined to create options that incorporate different citizens’ wants and needs, and advances in digital technologies allow for a larger, faster, and more diverse analysis of the data required to design and implement these complex solutions.
One of the main benefits of artificial intelligence, machine learning, and “big data” is the ability to uncover causal relationships between different variables. By using these technologies, it will be easier to understand the direct effects of policy decisions across a number of variables. This level of transparency would make for a more informed citizenry, as people could see how their representatives’ voting habits directly affect their bank accounts, the job market, their access to healthcare, etc. For example, governments could use the abundance of financial data to publish reports showcasing how specific tax laws actually affect different demographic groups, taking the guesswork out of evaluating policy decisions. The use of granular data to analyze mistakes in the designation of opportunity zones created by the Tax Cuts and Jobs Act is one example. Digital technologies could also allow for simulations of different policy proposals to see how they might affect an individual, city, or state. Digital twin technologies created by companies like Deloitte and Dassault are already being leveraged by governments to understand how certain decisions on energy and logistics will change how cities operate. Another possible use case of these technologies is policy trials that would allow governments to study the effects of lawmaking for a given geographic area and time period, similar to how businesses run trials on different marketing plans. A “low fidelity” version of this could be seen in 2013 when the Obama administration allowed Colorado and Washington to experiment with legalizing recreational marijuana.
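At bottom, the policy-trial idea is the same comparison businesses already make with marketing tests. As a rough sketch (the counties and numbers below are invented for illustration, not real data), the naive version is just a difference in average outcomes between areas that got the policy and comparable areas that didn’t:

```python
# Toy difference-in-means estimate for a hypothetical policy trial.
# All figures below are invented for illustration.

def mean(values):
    return sum(values) / len(values)

def policy_effect(trial_outcomes, control_outcomes):
    """Naive estimate of a policy's effect: the gap between average
    outcomes where the policy ran and comparable areas without it."""
    return mean(trial_outcomes) - mean(control_outcomes)

# Hypothetical per-county employment changes (percentage points)
trial = [1.8, 2.1, 1.5, 2.4]    # counties where the pilot policy ran
control = [0.9, 1.2, 0.7, 1.0]  # comparable counties without it

effect = policy_effect(trial, control)
print(f"Estimated effect: {effect:+.2f} percentage points")
```

A real evaluation would, of course, need randomized assignment, confidence intervals, and controls for confounders; the point is only that the underlying arithmetic is simple once the data exists and is published.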
While reasonable steps should be taken to anonymize such information, it’s possible to publish it in a manner that’s transparent and easily digestible by the press, policy analysts, and public officials, making it harder for pie-in-the-sky policy proposals to be introduced and adopted on nothing more than their proponents’ word.
Lastly, digital democracy can shed light on the actions of government and adjacent officials. The idea is similar to that of China’s surveillance system, but in reverse. If China can use digital technologies to monitor citizens, thereby dissuading populations from making certain choices, why can citizens not do the same to governments? Indeed, the adoption of a social credit system was a deliberate, top-down choice by the CCP, not the natural evolution of “big data.” Who’s to say that a liberal democracy can’t flip the script? By capturing and sharing instances of government corruption, police abuse, and lobbying malpractice, society’s officials will be dissuaded from making decisions against the public’s interests. Apart from Facebook and Twitter, a number of specialized platforms are being deployed for that very purpose. Guatemalans have experimented with a social platform that allows users to share examples of police corruption. German-made LobbyControl provides transparency on lobbying at the local and EU level. Working together, liberal democracies can share these platforms to create an international system of government accountability.
Defense through Digitization
Liberal democratic systems are not without their weaknesses. One such vulnerability is their slowness, as can be seen in America’s sedated and haphazard response to the COVID-19 pandemic. Just as they can be used to enhance tenets of liberal democracies, digital technologies can also protect against inherent weaknesses by accelerating government response in the face of crisis, preventing the spread of propaganda and polarization, and protecting citizens’ rights to freedom and privacy.
AI, ML, and big data can be leveraged in three key ways to help liberal democratic governments with crisis response. First, before a crisis strikes, algorithms can analyze data to uncover vulnerabilities in a system before they take hold. This could have been useful in predicting the collapse of the housing market prior to the 2008 financial crisis. Once a crisis has struck, these technologies can ascertain the principal drivers of a crisis so resources can be deployed accordingly. Tools like this could have been useful during the ongoing hunger crisis in Burkina Faso, where the government might not have closed key resources in the food supply chain had it realized that malnutrition has been a larger cause of death than COVID-19. Lastly, these tools can be used in the post-crisis period to understand which policies had the most beneficial impact, helping to prepare for future events, such as the next pandemic.
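The “uncover vulnerabilities before they take hold” step can be sketched as a simple statistical early-warning check. This is a hedged toy example (the indicator and every number in it are invented) that flags readings breaking sharply from an indicator’s recent history; a real system would layer far more sophisticated models on top of this kind of baseline:

```python
import statistics

def flag_anomalies(history, recent, z_threshold=3.0):
    """Flag recent readings more than z_threshold standard deviations
    from the historical mean -- a crude early-warning signal."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return [x for x in recent if abs(x - mu) > z_threshold * sigma]

# Hypothetical monthly mortgage-delinquency rates (percent)
history = [1.1, 1.0, 1.2, 0.9, 1.1, 1.0, 1.2, 1.1]
recent = [1.1, 1.3, 4.8]  # the last reading is a sharp break from trend

print(flag_anomalies(history, recent))  # -> [4.8]
```

The value of publishing this kind of check openly is exactly the point made below about the 2008 crisis: more eyes on the same warning light make it harder to dismiss.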
And again, openness with this information makes it possible for more parties to cry foul when something isn’t right. In the case of the 2008 financial crisis, there was a vocal minority sounding the alarm. Still, the talking heads and smartest guys in the room maintained a rosy view and were able to dismiss those critics as Cassandras. Open access by a larger swath of the public to warning signs from reliable sources makes it less likely that those who should know better can take a “nothing-to-see-here” line to be repeated by pundits, public intellectuals, and policymakers.
Sometimes weaknesses and vulnerabilities are driven by our own applications of technology and require course correction. In democracies across the globe, it seems as though the public is becoming more polarized. One driver of this is the ease of leveraging technology to sharpen divides in societies and subsequently weaponize public opinion. But it is not the technology that is inherently at fault; it is the application of these algorithms to maximize views and profits. Studies show that a large percentage of citizens are less polarized than previously believed. Unfortunately, these moderates may choose to stay away from certain social platforms in order to avoid inflammatory media. But what if the algorithms were rewritten to prop up neutral voices rather than to spread inflammatory content? Moderates may be more willing to use these platforms, and populations would more readily see muted perspectives on an issue. By redesigning the algorithms we use to spread information, digital technologies may be able to turn the tide against tribalization, and subsequently polarization. There is no neutral design, and those who bring more light than heat, turning down the temperature of online discourse more broadly, deserve amplification.
As discussed in the previous section, recommendation systems operate on algorithms that we do not fully understand. These algorithms are not inherently undemocratic, but their applications can lead to unwanted side effects that infringe on our freedoms and privacy. By understanding how to game these algorithmic recommendation engines, outside actors are able to create media that can influence perspectives and, subsequently, our decision-making process, in effect limiting our freedoms by undermining our autonomy. What makes this even more dangerous is that we are seldom aware it is happening as we scroll through videos, posts, and tweets on autopilot. A solution to this infringement on our online freedoms can be found by assessing these algorithms and redesigning them to serve the purposes we require.
Looking to privacy, a number of tools have been created that can add noise to data, making it difficult for digital entities to uncover insights. As an example, a program on your computer could randomly jump to different websites during your downtime to prevent unwanted AI systems from making accurate recommendations based on your browsing history. Likewise, AI software can add similar noise to online pictures by changing a few pixels' colors. One could apply this noise to their Facebook or Instagram posts to prevent facial recognition software from recognizing the images, while allowing friends and family to see the pictures largely unchanged. These sorts of systems could be used in places like China to confuse digital surveillance technology. One key note to remember: if and when "digital democracies" start to appear, it is important not to cross the threshold into authoritarianism. The goal is to increase resiliency without further infringing on our rights to freedom and privacy through digital technologies.
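To make the pixel-noise idea concrete, here is a toy sketch: it nudges a small fraction of pixels in an RGB grid by a few color values, so the picture looks essentially unchanged to a human eye. This is only an illustration of the general idea; real image-cloaking tools (such as the Fawkes research project) compute targeted perturbations against specific face-recognition models rather than applying random noise.

```python
import random

def add_pixel_noise(pixels, delta=3, fraction=0.05, seed=0):
    """Return a copy of an RGB pixel grid with a small fraction of
    pixels nudged by at most +/-delta per channel.

    Toy illustration only: random noise like this will not defeat a
    real face-recognition model, but it shows the kind of
    imperceptible change the text describes.
    """
    rng = random.Random(seed)
    noisy = [row[:] for row in pixels]  # shallow copy; tuples are immutable
    h, w = len(pixels), len(pixels[0])
    n_changed = max(1, int(h * w * fraction))
    for _ in range(n_changed):
        y, x = rng.randrange(h), rng.randrange(w)
        noisy[y][x] = tuple(
            min(255, max(0, c + rng.randint(-delta, delta)))
            for c in noisy[y][x]
        )
    return noisy

# An 8x8 mid-gray "image": after noising, no channel moves more than delta.
image = [[(128, 128, 128)] * 8 for _ in range(8)]
noisy = add_pixel_noise(image)
max_diff = max(
    abs(a - b)
    for row_a, row_b in zip(image, noisy)
    for pa, pb in zip(row_a, row_b)
    for a, b in zip(pa, pb)
)
print(max_diff <= 3)  # every channel changed by at most delta
```

The key property, as the article notes, is asymmetry: a perturbation small enough that friends and family see the same photo, but structured (in real tools, adversarially optimized) to disrupt automated recognition.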
Increasing Resiliency through Trust
The examples above serve as a start to the discussion of how democracies can become resilient through the use of digital technologies. But "digital democracies" are not inevitable, as Western liberal society harbors a certain mistrust of the big tech companies that are vital to driving such a transformation. In order to evolve into "digital democracies", three main societal changes must occur.
The first is establishing trust between big Western tech conglomerates and governments. In the US, mistrust of companies like Google, Facebook, and Amazon has led some to call for breaking up these giants, but this is not the solution. Without these companies on our side, liberal democracies may not be able to keep up with the advancements made by Eastern tech conglomerates such as Baidu, Alibaba, Tencent, and Xiaomi. Instead, governments must partner with tech firms in order to more clearly define rules and regulations, as well as the responsibilities of each group. Without an open, non-overreaching dialogue, the situation will remain hostile, making it difficult to establish technological resiliency.
Second is trust between governments and the general public regarding the use of digital technologies. Partnerships between big tech and governments, as discussed above, may lead to greater issues, as can be seen in China's use of big tech to create a state-controlled market and social economy. Such a partnership could also be antithetical to the principles of liberalism by placing too much power in the hands of the government. Just as the media was once seen as a watchdog over governments and politicians, there must be an independent body that serves as a watchdog over governments and their use of tech. There are nonprofits, such as the Center for Humane Technology, that advocate for mission-focused tech development. Similar organizations will be necessary to serve as guardians against governments' abuse of digital technologies. It is with the existence of independent bodies such as these that populations may begin to trust governments to use technologies only to further the ideals of liberalism.
The last piece is establishing trust between big tech and the general public. This ties back to transparency. As stakeholders in our own data, people should have a say in, or at minimum an understanding of, how their information is used. But many of the new AI, ML, and data models utilized by big tech are often seen as "black boxes." Our data goes in, and a result in the form of a product recommendation, news story, or social media post comes out, without a clear understanding of how the outcome was reached. By opening up algorithms and making them fair, accountable, and transparent, companies would let people see how their data is truly acquired, assessed, and leveraged. This could be a key step in making digital technologies democratic: it would allow citizens to claim a stake in technology, just as big tech has claimed a stake in our data.
Trust between citizens and governments is a fundamental principle of liberalism and democracy, but in today’s ever-polarizing society, this can be hard to come by. The situation becomes even more complex when adding tech titans to the mix. Organizations exist to help establish this trust by guiding governments and big tech in more “humane” directions, but it will take cooperation by all stakeholders, along with NGO partners to increase outreach and communication for a more transparent relationship. This is the first step towards increasing resiliency of democracy, a necessary lever to swing the pendulum back to the people.
Ishpreet Singh is a recent engineering grad currently working as a strategy consultant.
Filed Under: democracy, governance, technology
Private Tech Companies Are Making Law Enforcement's Opacity Problem Even Worse
from the 'for-the-people,'-not-'despite-the-people' dept
The increasing reliance on tech by law enforcement means the increasing reliance on private companies. It’s inevitable that tech developments will be adopted by government agencies, but a lot of this adoption has occurred with minimal oversight or public input. That lack of public insight carries forward to criminal trials, where companies have successfully stepped in to prevent defendants from accessing information about evidence, citing concerns about exposed trade secrets or proprietary software. In other cases, prosecutors have dropped cases rather than risk discussing supposedly sensitive tech in open court.
Elizabeth Joh’s new article for Science says corporations are making existing transparency and accountability problems in law enforcement even worse.
Private companies are often wary of divulging too much about their products to gain competitive advantage over rivals. As a consequence, companies may decide to protect their intellectual property and market advantages by invoking trade secret privileges, demanding nondisclosure agreements with customers, and imposing other forms of property protections. These forms of commercial secrecy, common enough outside of the criminal justice system, pose challenges to basic police accountability.
This is only one of the problems the adoption of private sector tech creates. There are others. As Joh points out, law enforcement officers often attest to their “training and expertise” when testifying in court or seeking warrants. But actual tech expertise is the exception, not the rule. Private companies market products to law enforcement agencies, but the rollout of purchased tech is rarely accompanied by immersive training. In some cases, any analytic work is offloaded to private contractors, making it even less likely the public will ever be fully apprised of how the tech works or why — in the case of predictive policing software and facial recognition AI — it arrives at the conclusions it does.
Saddled by non-disclosure agreements, normal police secrecy, claims of valuable trade secrets, and a lack of technical expertise by law enforcement end users, private company tech can become a black hole where data on citizens goes in, but never comes back out for public scrutiny.
Take, for example, ShotSpotter. Its sensors and microphones pick up percussive noises. These are transferred to ShotSpotter’s analysts, who then make a judgment call about the overheard noises. Sometimes the analysts are wrong. Sometimes, more disturbingly, they alter their judgment calls after being contacted by police officers. How accurate is it? More importantly, how can the public access this data without relying on either ShotSpotter’s cheery claims about high accuracy or secondhand intuition based on agencies that have dumped the tech after too many false positives?
The real answer may never be known.
[A] community or researcher who wants to know more about ShotSpotter—its accuracy and its flaws—may find a dead end. ShotSpotter’s contractual arrangements with its police customers provide them with results, and results only. The company claims ownership not just of its proprietary software but also of the data its technology generates. This means that conventional tools of disclosure, like state public records requests laws, have no purchase on any acoustic gunfire detection system because such systems remain within private hands.
For better or worse, the solution likely runs through local governments. These entities can forbid law enforcement agencies from purchasing tech from vendors unwilling to allow public scrutiny of their software and hardware. They can demand more transparency on usage and effectiveness from the agencies they oversee. The general public may not be able to take its law enforcement business elsewhere, but their elected reps can help ensure the agencies they’re stuck with are more accountable.
The other check against misuse is the nation’s courts. Judges can (and should) challenge broad statements about law enforcement expertise and stop letting companies whose tech has generated evidence shirk their obligations to criminal defendants, who have a constitutional right to examine the evidence against them and confront their accusers in court, even if the accuser is a shot-spotting sensor or a proprietary DNA-matching algorithm.
No one’s saying cops shouldn’t have access to tech advances. But governments need to do more to ensure these business relationships don’t supersede law enforcement agencies’ obligations to the public. Action needs to be taken now — both at the local and national level — to prevent ongoing problems from getting worse and to head off future abuses and injustices before they can occur.
Filed Under: evidence, law enforcement, technology, transparency
A Guy Walks Into A Bra
from the women-in-technology dept
A recent and surprisingly unpleasant professional encounter found me thinking again about an experience I had in the late 90s, during my earlier career as a web developer before I went to law school. I’d gotten involved with a group that put on monthly meetings on topics of interest to the local community of Internet professionals. After the meetings a bunch of us would typically go out for dinner to chat and catch up. I did know some women from the organization, but I think most of the time the friends I went out with afterwards were men. It has never really bothered me to be in situations where I am outnumbered by men, so long as I’m treated with the respect of an equal. And I had no quarrel with my male friends on that front. But that evening drove home one reason why it matters that women are not better represented in technology in general.
Out at dinner we began “talking shop” almost immediately, discussing, in those early days of the Web, the importance of e-commerce to businesses and what sort of web presences companies needed to have in order to be able to profit from the Internet. We started listing stories of successes and failures, but the conversation ground to a halt once I offered my example:
“I have a bra I really like, and I’d like to buy another, but I can’t seem to find a web site for the brand that would allow me to order one.”
(Men, I am assuming that you will keep reading the rest of this post, so that I can make my point. But based on my friends’ reaction I wouldn’t be surprised if you’ve already slammed down the lid of your laptop, or tossed aside your phone, and run away. In which case, if that’s your inclination, it’s even more important that you keep reading.)
The example I raised was a perfectly reasonable one. I was sharing an example of a significant e-commerce opportunity being left untapped for no good reason. The essential facts were indisputable: many women wear bras, bras don’t last forever, women would probably like to replace their worn-out bras with ones they know they like, and women will pay money to a bra manufacturer to get the bra they want. Therefore, any bra manufacturer not using the Internet to facilitate this purchase was leaving money on the table.
The same would be true for plenty of other goods as well, and I’m sure if I’d swapped the word “women” for “men” and instead listed a product specific to the latter my friends would have readily agreed that it needed to be sold online. After all, at least one of them had an MBA, and they were some of the biggest Internet commerce evangelists I knew. But that the product example was something specific to women’s bodies completely shut them down. They practically squirmed out of their seats, desperate for the subject to be changed.
It was an uncomfortable moment for me, too, realizing that an ordinary reality of the female existence could be so unwelcome in a professional conversation. Was it too immodest to discuss undergarments with work colleagues? In an era when Viagra commercials were already running on broadcast television it would hardly seem so. If there was no compunction against discussing the commercialization of such intimate matters for men, why could that same clinical detachment not be afforded to similar topics important to women? After all, this wasn’t second grade; no one was going to catch cooties talking about a specific form of underwear common to many women. The bottom line is that women are people and peers and professionals and deserve not to be regarded with the adolescent squeamishness that all too often keeps us apart from the world.
And as far as our discussion was concerned, the subject of selling bras online was a perfectly salient example to cite, just as any male-specific product would have been. In fact, from the larger perspective of e-commerce, it had to get raised by someone. But it seems to take someone with experience with these ideas to bring them to the fore, which means that without having women involved in the decision making they are going to be forever overlooked by the men in charge, who all too easily can regard such topics as icky and esoteric, or outright ignorable, rather than worthwhile business problems to solve.
In the twenty-odd years since that dinner, bra manufacturers did eventually discover the web. Yet two decades later, we are still talking about women in technology, including the relative lack thereof. And it’s an absence that hasn’t stopped mattering.
I’ve never been one who wanted to believe it might matter. As I said earlier, I’ve never generally been bothered by being one of the few or only women in a situation, because I didn’t think it should matter. To me, true equality means that men and women should essentially be interchangeable, with all of us passing through life based on our merit as people. And I’ve always worried that if we focused too much on gender issues it might overly dwell on our differences, end up being divisive, and thus keep us from ever getting there.
But the reality is that we aren’t there, at least not yet. While there are lots of women in technology, albeit more in some sectors than others, we’re not at a point where we exist in numbers on par with our male counterparts. And as with any other demographic where inclusion doesn’t come easily or equivalently, that lack of representation has consequences.
First, as the bra example illustrates, it leaves out of the technology conversation the insights and additions that women can bring. Although in every way that matters women are equal to men, the reality is that there can be some differences in our physical construction and, moreover, in our lived experiences. These differences shape our perspectives, our awareness of issues others might overlook, and perhaps our sensibilities as well. As a result, as with all people from the diverse fabric of humanity, they give us something extra to contribute that is valuable, and that should be valued.
But also, sometimes it is our absence itself that is what makes our lives different, and not in a good way. Because when women are not at the table it teaches everyone that women do not belong at the table. Which makes it really hard to then come along as a woman and try to sit at the table and be treated as the equal that we are.
About a year before the dinner described above I had a different job developing websites at a start-up. It was not a great job for a number of reasons, including that my boss didn’t actually know how to make websites. So he tended to give me instructions that were, at best, infeasible. One day I explained that we couldn’t do what he asked because we had to use the web-safe color palette or else the page would not render well. Back then, limitations in computer monitor technology meant that web sites were effectively limited to 216 colors in order to render predictably, and I was correct to point out the need to adhere to this common web design practice. But I was a woman dropping this knowledge on a man. He didn’t believe it until he looked across at my male colleague who confirmed it.
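She was right, and the constraint is easy to reconstruct: the web-safe palette consists of every color whose red, green, and blue channels each take one of six evenly spaced values (hex 00, 33, 66, 99, CC, FF), which is where the 216 figure comes from (6 × 6 × 6). A quick sketch:

```python
# The "web-safe" palette: each channel is one of six evenly spaced
# values, 51 (0x33) apart, giving 6**3 = 216 predictable colors on
# the 8-bit displays of the era.
STEPS = [0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF]

web_safe = [
    f"#{r:02X}{g:02X}{b:02X}"
    for r in STEPS for g in STEPS for b in STEPS
]

print(len(web_safe))              # 216
print(web_safe[0], web_safe[-1])  # #000000 #FFFFFF
```

Any color outside this set risked being dithered or remapped on 256-color monitors, which is exactly why designers of the time treated the palette as a hard constraint.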
It was such a stark wake-up call that it didn’t necessarily matter how good I was at my job. For some men I would never be good enough simply because I wasn’t one of them. And it is among those sorts of attitudes that I am supposed to somehow carve out my career.
On the other hand, ever watch re-runs from earlier decades? In many important ways things are significantly better for women than they used to be. Including that there are plenty of men who welcome us as full equals at the table. But that doesn’t mean that things are totally fine; in fact, far from it. Many challenges remain, and one of those challenges is implicit (and sometimes explicit) sex bias, which, even if it only comes up in a minority of situations, still ends up being an issue in quite a few situations. And part of why we need to contend with it is because it can be subtle. While it should hopefully be clear to everyone by now that no one should ever have to deal with the sort of verbal and physical harassment that prompted the #metoo movement, too many men seem to think that simply not outright abusing their female colleagues somehow absolves them of being sexist. But that’s hardly the benchmark.
Instead, as that recent unpleasant experience reminded me, there are other questions that need to be asked. Such as: are women as welcome to contribute to the best of our capacity as our male counterparts are? Or is our presence just merely tolerated because at this point it might have to be? When we speak, are we heard like our male colleagues are heard? Or are we tuned out like my friends did to me when I shared a perspective they didn’t want to hear or, worse, like my former boss did when I tried to speak with authority and expertise?
Obviously no woman is going to be right on everything, just as no man would be. We’re not even going to always agree among ourselves. But if we’re not generally regarded as having equivalent ethos as an equally-positioned man, and therefore denied the opportunities to be in an equal position, then that’s a problem. It’s a problem for women, it’s even a problem for men, and it’s a problem for any industry that drives our contributions away.
Filed Under: harassment, men, sexism, technology, women
Documents Show NYPD Has A Secret Surveillance Tech Slush Fund
from the pre-approved-dark-funding dept
About a half-decade ago, public records requesters discovered the Chicago Police Department had been spending seized funds on surveillance equipment like Stingray devices. The forfeiture fund was apparently completely discretionary and the PD used this steady supply of cash to make purchases not specifically approved by the city. It also allowed the department to elude direct oversight of surveillance activities and ensure the public was unable to interrupt the procurement process with pesky comments and questions.
It appears the New York Police Department has been doing the same thing for at least as long. But it’s not doing it with “discretionary” funds lifted from New York residents using civil forfeiture. Documents obtained by Wired show the infamously secretive agency has even more secrecy up its sleeves — a fund that is specifically exempt from its own oversight.
New York City police bought a range of surveillance tools—including facial-recognition software, predictive policing software, vans equipped with x-ray machines to detect weapons, and “stingray” cell site simulators—with no public oversight, according to documents released Tuesday.
In all, the documents show that the NYPD spent at least $159 million since 2007 through a little-known “Special Expenses Fund” that did not require approval by the city council or other municipal officials. The documents were made public by two civil rights groups, the Legal Aid Society and the Surveillance Technology Oversight Project (STOP), which say the practice amounted to a “surveillance slush fund.”
Millions of dollars went to Idemia Solutions, a facial recognition tech provider. Hundreds of thousands went to an Israeli defense contractor, which has provided some sort of “devices” to the PD (details on the devices are redacted). Three-quarters of a million went to a mobile x-ray van manufacturer. The list continues, encompassing a cell site simulator provider and other surveillance tech/software contractors whose documents have been redacted into near-uselessness.
Unfortunately, it appears the city gave its explicit blessing to being cut out of the approval process. A memorandum of understanding between the NYPD and the city’s Office of Management and Budget allows the NYPD to withhold contracts and other information dealing with tech/tools used in “confidential operations.” So, the city is completely complicit here, which differentiates this from the situation in Chicago. In New York, taxpayers are (or rather, aren’t) seeing their tax dollars spent on secret tech from a fund no one is allowed to oversee.
Combining secret tech with zero accountability is only the NYPD’s idea of a good time. Hopefully this national exposure will prompt the city to shred its memorandum of understanding and start over with some accountability measures in place.
Filed Under: discretionary funds, nypd, police, slush fund, surveillance, technology, transparency