coding – Techdirt

Move Over, Software Developers – In The Name Of Cybersecurity, The Government Wants To Drive

from the unconstitutional-camel-noses dept

Earlier this year the White House put out a document articulating a National Cybersecurity Strategy. It articulates five “pillars,” or high-level focus areas where the government should concentrate its efforts to strengthen the nation’s resilience and defense against cyberattacks: (1) Defend Critical Infrastructure, (2) Disrupt and Dismantle Threat Actors, (3) Shape Market Forces to Drive Security and Resilience, (4) Invest in a Resilient Future, and (5) Forge International Partnerships to Pursue Shared Goals. Each pillar also includes several sub-priorities and objectives as well.

It is a seminal document, and one that has spawned, and will continue to spawn, much discussion. For the most part what it calls for is too high level to be particularly controversial. It may even be too high level to be all that useful, although there can be value in distilling any sort of policy priorities into words. After all, even if what the government calls for may seem obvious (like “defending critical infrastructure,” which of course we’d all expect it to do), going to the trouble to actually articulate it as a policy priority provides a roadmap for more constructive efforts to follow and may help to marshal resources. It can also help ensure that any more tangible policy efforts the government directly engages in are not at cross-purposes with what it wants to accomplish overall.

Which is important because what the rest of this post discusses is how the strategy document itself reveals that there may already be some incoherence among the government’s policy priorities. In particular, it lists as one of the sub-priorities an objective with troubling implications: imposing liability on software developers. This priority is described in a few paragraphs in the section entitled, “Strategic Objective 3.3: Shift Liability for Insecure Software Products and Services,” but the essence is mostly captured in this one:

The Administration will work with Congress and the private sector to develop legislation establishing liability for software products and services. Any such legislation should prevent manufacturers and software publishers with market power from fully disclaiming liability by contract, and establish higher standards of care for software in specific high-risk scenarios. To begin to shape standards of care for secure software development, the Administration will drive the development of an adaptable safe harbor framework to shield from liability companies that securely develop and maintain their software products and services. This safe harbor will draw from current best practices for secure software development, such as the NIST Secure Software Development Framework. It also must evolve over time, incorporating new tools for secure software development, software transparency, and vulnerability discovery.

Despite some equivocating language, at its essence it is no small thing that the White House proposes: legislation instructing people on how to code their software and requiring adherence to those instructions. Such a proposal raises a number of concerns, both in the method the government would use to prescribe how software is coded, and in the dubious constitutionality of its making such demands at all. While with this strategy document the government is not yet prescribing a specific way to code software, it contemplates that the government someday could. And it does so apparently without recognizing how profoundly the ability to make such demands would shape software development – and not necessarily for the better.

In terms of method, while the government isn’t necessarily suggesting that a regulator enforce requirements for software code, what it does propose is far from a light touch: allowing enforcement of coding requirements via liability – or, in other words, via the ability of people to sue if software turns out to be vulnerable. But regulation via liability is still profoundly heavy-handed, perhaps even more so than regulatory oversight would be. For instance, instead of a single regulator working from discrete criteria, there will be myriad plaintiffs and courts interpreting the statutory language however they understand it. Furthermore, litigation is notoriously expensive, even for a single case, let alone for the many cases those myriad plaintiffs could bring. We have seen all too many innovative companies be obliterated by litigation, and seen how the mere threat of litigation can chill the investment needed to bring good new ideas into reality. This proposal seems to reflect a naïve expectation that litigation will only follow where truly deserved, but we know from history that such restraint is rarely the rule.

True, the government does contemplate some tuning to dull the edge of the regulatory knife, particularly through the use of safe harbors, such that there would be defenses to protect software developers from being drained dry by unmeritorious litigation threats. But while a safe harbor may be a nice idea, safe harbors are hardly a panacea: as we’ve seen, if you have to litigate whether a safe harbor applies, it provides little practical protection even when it ultimately does. In addition, even if it were possible to craft an adequately durable safe harbor, given the current appetite among policymakers to tear down the immunities and safe harbors we already have, like Section 230 or the already porous DMCA, the assumption that policymakers will actually produce a sustainable liability regime with sufficiently strong defenses, one not prone to innovation-killing abuse, is yet another unfortunately naïve expectation.

The way liability would attach under this proposal is also a big deal: through the creation of a duty of care for the software developer. (The cited paragraph refers to “standards of care,” but that phrasing implies a duty to adhere to them, and liability for when those standards are deviated from.) Concocting such a duty is problematic both practically and constitutionally, because at its core, what the government is threatening here is alarming: mandating how software is written. Not suggesting how software should ideally be written, nor enabling, encouraging, or facilitating it being written well, but using the force of law to demand how software be written.

It is so alarming because software is itself a form of written expression, and it raises a significant First Amendment problem for the government to dictate how anything should be expressed, regardless of how correct or well-intentioned the government may be. Like a book or newspaper, software is expressed through language and expressive choices; there is not just one correct way to write a program that does something, but rather an infinite number of big and little structural and language decisions made along the way. But this proposal basically ignores the creative aspect of software development (indeed, software is even treated as eligible for copyright protection as an original work of authorship). Instead it treats software more like a defectively-made toaster than a book or newspaper, replacing the independent expressive judgment of the software developer with the government’s. Courts have also recognized the expressive quality of software, so it would be quite a sea change if the Constitution somehow did not apply to this particular form of expression. And such a change would have huge implications, because cybersecurity is not the only reason the government keeps proposing to regulate software design. The White House proposal would seem to bless all these attempts, no matter how ill-advised or facially censorial, by not even contemplating the constitutional hurdles any legal regime regulating software design would need to clear.
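To make the expressive-choice point concrete, here is a hypothetical illustration (the function names and data are invented for this sketch): two Python functions that compute exactly the same thing, written two different ways. The choice between them is stylistic rather than functional – precisely the kind of authorial decision a coding mandate would take out of the developer's hands.

```python
# Two of the many possible ways to express the same computation:
# summing the squares of the even numbers in a list. Both are correct;
# picking one is an expressive decision, like word choice in prose.

def sum_even_squares_loop(numbers):
    """Imperative style: explicit iteration and accumulation."""
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

def sum_even_squares_expr(numbers):
    """Declarative style: a generator expression doing identical work."""
    return sum(n * n for n in numbers if n % 2 == 0)

print(sum_even_squares_loop([1, 2, 3, 4]))  # 4 + 16 = 20
print(sum_even_squares_expr([1, 2, 3, 4]))  # 20
```

Multiply that small choice across every function, data structure, and module in a real program and the space of reasonable ways to write it becomes effectively infinite.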

It would still need to clear them even if the government truly knew best – which is a big if, even here, and not just because the government may lack adequate or current expertise. The proposal does contemplate a multi-stakeholder process to develop best practices, and there is nothing wrong in general with the government taking on some sort of facilitating role to help illuminate what those practices are and to make sure software developers are aware of them – it may even be a good idea. The issue is not that there is no such thing as best practices for software development – obviously there are. But they are not necessarily one-size-fits-all or static; a best practice may depend on context, and may constantly need to evolve to address new vectors of attack. A distant regulator, inherently in a reactive posture, may not understand the particular needs of a particular software program’s userbase, nor the evolving challenges facing the developer. Which is a big reason why requiring adherence to any particular practice through the force of law is problematic: it can effectively require software developers to write their code the government’s way, rather than in what is ultimately the best way for them and their users. Or it at least puts them in the position of having to defend choices that, up until now, the Constitution had let them make freely. That would amount to a huge, unprecedented burden that threatens to chill software development altogether.
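A hypothetical sketch of that context-dependence (the table name and input string below are invented for illustration): generic advice like “neutralize untrusted input” maps to entirely different techniques in different settings – parameterized queries in a SQL context, output escaping in an HTML context – and applying one where the other belongs provides no protection at all. A codified rule written with one context in mind could mandate exactly the wrong thing in another.

```python
import html
import sqlite3

# The same untrusted string needs different handling depending on
# where it ends up, so no single codified rule fits every context.
user_input = "Robert'); DROP TABLE students;-- <script>alert(1)</script>"

# SQL context: the widely accepted practice is a parameterized query,
# which keeps the input as data rather than executable SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")
conn.execute("INSERT INTO students (name) VALUES (?)", (user_input,))
stored = conn.execute("SELECT name FROM students").fetchone()[0]
assert stored == user_input  # stored verbatim, not executed

# HTML context: the accepted practice is output escaping -- which would
# have done nothing to make the SQL statement above safe.
safe_html = html.escape(user_input)
print(safe_html)  # angle brackets and quotes become HTML entities
```

And both practices themselves evolve: escaping-based SQL defenses were once common advice before parameterized queries displaced them, which is the kind of shift a static legal standard would struggle to track.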

Such chilling is not an outcome the government should want to invite, and indeed, according to the strategy document itself, does not want. The irony of the software liability proposal is that it is inherently out-of-step with the overall thrust of the rest of the document, and even with the third pillar it appears in, which proposes to foster better cybersecurity through the operation of more efficient markets. But imposing design liability would have the exact opposite effect on those markets. Even if well-resourced private entities (e.g., large companies) might be able to persevere and navigate the regulatory requirements, small ones (including those potentially excluded from the stakeholder process establishing the requirements) may not be able to meet them, and individual people coding software are even less likely to. The strategy document refers to liability only for developers with market power, but not every software developer has market power, including those individuals who voluntarily contribute to open source software projects, which provide software users with more choices. Those continued contributions will be deterred if the people who make them can be held liable for them. Ultimately software liability will result in fewer people writing code and consequently less software for the public to use. So far from making the software market work more efficiently through competitive pressure, imposing liability for software development will only remove options for consumers, and with them the competitive pressure the White House acknowledges is needed to prompt those who still produce software to do better. Meanwhile, the developers who remain will be inhibited from innovating if that innovation can potentially put them out of compliance with whatever the law has so far managed to imagine.

Which raises another concern with the software liability proposal: how it undermines the rest of the otherwise reasonable strategy document. The fifth pillar the White House proposes is to “Forge International Partnerships to Pursue Shared Goals”:

The United States seeks a world where responsible state behavior in cyberspace is expected and rewarded and where irresponsible behavior is isolating and costly. To achieve this goal, we will continue to engage with countries working in opposition to our larger agenda on common problems while we build a broad coalition of nations working to maintain an open, free, global, interoperable, reliable, and secure Internet.

On its face, there is nothing wrong with this goal either, and it, too, may be a necessary one to effectively deal with what are generally global cybersecurity threats. But the EU is already moving ahead to empower bureaucratic agencies to decide how software should be written, without a First Amendment or equivalent understanding of the expressive interests such regulation might impact. Nor does there seem to be any meaningful understanding of how any such regulation will affect the entire software ecosystem, including open source, where authorship emerges from a community rather than from a private entity theoretically capable of accountability and compliance.

In fact, while the United States hasn’t yet actually specified design practices a software developer must comply with, the EU is already barreling down the path of prescriptive regulation, proposing a law that would task an agency with dictating the criteria software must meet. (See this post by Bert Hubert for a helpful summary of its draft terms.) Like the White House, the EU confuses its stated goal of helping the software market work more efficiently with an attempt to control what can be in that market. For all the reasons that such an attempt by the US stands to be counterproductive, so too would the EU’s efforts. It may thus turn out to be European bureaucrats who attempt to dictate the rules of the road for how software can be coded, but that means it should be America’s job to try to prevent that damage, not to double down on it.

It is of course true that not everything software developers currently do is a good idea or even defensible. Some practices are dreadful and damaging. It isn’t wrong to be concerned about the collateral effects of ill-considered or sloppy coding practices, or for the government to want to do something about them. But how regulators respond to these poor practices matters just as much, if not more, than that they respond, if they are going to make our digital environment better and more secure rather than worse and less so. There are a lot of good ideas in the strategy document for how to achieve this end, but imposing software design liability is not one of them.

Filed Under: 1st amendment, chilling effects, coding, computer security, cybersecurity, duty of care, innovation, liability, national cybersecurity strategy, software, standards of care, white house

Daily Deal: C# Coding Bootcamp

from the good-deals-on-cool-stuff dept

C# is a powerful, versatile, cross-platform language. Take the 11-course C# Coding Bootcamp and begin to master the basics. With 89+ hours of instruction, you’ll learn the fundamentals of C#, how to write clean and efficient code, how to build apps in Microsoft Visual Studio, and much more. Head over to the Techdirt Deals store and pick up this bundle for $69.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Filed Under: coding, daily deal, programming

Awesome Stuff: Tech Toys

from the all-play-and-some-work dept

This week, we’ve got a lineup of crowdfunded fun with three high-tech toys, only one of which is designed primarily for kids (and it’s the most mature and productive of the three).

FDL-1

The foam dart arms race continues with the FDL-1, which may be the most fearsome contender yet. It’s a high-power, fully-automatic robotic dart launcher that can be configured as a standalone turret or a handheld blaster. But the truly cool part is how it’s made: apart from the electronic guts, the entire thing can be produced on most average hobbyist 3D printers with a 6″ cube build size (not just high-end professional models). All of the schematics, instructions and software are open source and/or Creative Commons ShareAlike, so upon release the FDL-1 will be free and easy for anyone to build and modify. In the meantime, its 3D-printed construction also enables several ways to order one on Kickstarter at different tiers (though the prices of all three are high): as a 3D printing kit that includes components and filament, as an assembly kit with components and pre-printed pieces, or as a fully assembled unit.

Kamibot

Though I’m sure there are plenty of kids who wouldn’t mind getting their hands on an FDL-1, it’s a pretty advanced project with a price tag of several hundred dollars to boot. For them, there’s the Kamibot: a papercraft robot kit designed to teach kids to code. To keep things at a beginner’s level, the robot itself is a single pre-made unit based on open source Arduino, with IR and ultrasound sensors, multicolor LEDs, and a single servo in addition to its dual-motor drive. It’s wirelessly controllable and, more importantly, highly programmable via a robust drag-and-drop “learn to code” interface. To keep things fun and interesting for kids, it also comes with a bunch of papercraft templates for building cool-looking skins on top of the robot itself, from tanks to Frankenstein.

Immersit

“Moving seats” that rise, fall, tilt and sway according to what’s on screen were a staple of Universal Studios when I went there as a kid, and if you’d asked me then (or yesterday, for that matter) whether that technology would be coming to the living room anytime soon, I’d probably have dismissed the possibility. Well, the Immersit has shown otherwise: it’s a home system that adds motion and vibration feedback to just about any sofa. It works with PC, Xbox and PlayStation and is preconfigured to respond to 120+ games, not to mention a whole bunch of movies (it works with plain old video, too). For games, the motion is based on various signals detected from the game, and can be configured at a granular level to change which motions go with which game actions. For movies, the team is using a combination of software and human adjustment to create motion tracks; the Immersit detects the movie being played and looks up the appropriate motion track. As with all such devices, it has to be tried to be properly evaluated, and I’d be pretty dubious about dropping $700+ on one without doing so – but the reviews from those who’ve had the chance are so far pretty positive.

Filed Under: 3d printing, awesome stuff, coding, robotics, toys

Techdirt Podcast Episode 6: Should Kids Be Forced To Learn Coding? Or Economics? Or Stats?

from the education-in-the-information-age dept

There’s been plenty of discussion online about whether or not kids should be taught “coding” as a core curriculum topic like math and reading. And there’s a compelling argument in this technological age that, at least, basic coding concepts are something everyone should know, just to be literate when it comes to many of the key work and life challenges we’ll be facing over the next few decades. But perhaps an equally compelling argument could be made for teaching economics. Or statistics. Or maybe even journalism. Or is it just that everyone wants kids to learn the things that they themselves do on a daily basis, because no one else seems to understand them? Maybe we should just teach problem solving. Or common sense. But how do you teach either of those things? And if we’re adding new subjects, which ones do we take away? Figuring out the education curriculum for the modern age isn’t quite as easy as we originally thought. Hersh, Dennis and I discuss these questions and more in this week’s episode.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes, or simply plug the RSS feed into your favorite podcatcher app. Of course, you can also keep up with all the latest episodes right here on Techdirt.

Filed Under: coding, economics, education, podcast, statistics

DailyDirt: Learning How To Do Math Like A Boss

from the urls-we-dig-up dept

Some people claim that they are not “math people” – that their brains just don’t understand mathematics the way “normal” people are supposed to learn it. Perhaps that’s true for some, but the subject of math seems to be taught in a way that tends to weed people out as concepts get more abstract. Educators are trying to figure out how to avoid making math lessons as painful as they might have been in the past (and hopefully not create any further torture with “new math” or even “newer new math”). Here are just a few links on changing the way these skills are taught.

If you’d like to read more awesome and interesting stuff, check out this unrelated (but not entirely random!) Techdirt post via StumbleUpon.

Filed Under: algebra, ans, approximate number system, calculus, coding, education, intuition, learning, math, number sense, preschoolers, programming, stem, teachers, teaching

California Cracking Down On Coding Bootcamps For Teaching Coding Without A License

from the licensing-insanity dept

A couple years ago, we wrote about the nutty situation in which state regulators for all sorts of industries do more to simply stop competition than to provide any sort of “consumer protection.” This is not to say that there isn’t a role for regulation in protecting consumers. There may well be, but the more you look at how it works, the more you realize how the system is almost inevitably gamed to block upstarts and competitors. In the example in that story, we talked about a woman who got in trouble for braiding people’s hair without a “cosmetology” license.

Now we’ve got something happening in California that is even more related to things we’re interested in, though no less ridiculous. The California Bureau for Private Postsecondary Education (BPPE) has sent cease-and-desist letters to a bunch of organizations that run “learn to code” programs, claiming that they’re teaching coding without a license and need to be shut down.

In mid-January, the Bureau for Private Postsecondary Education (BPPE) sent cease and desist letters to Hackbright Academy, Hack Reactor, App Academy, Zipfian Academy, and others. General Assembly confirmed that it began working with BPPE several months ago in order to achieve compliance.

BPPE, a unit in the California Department of Consumer Affairs, is arguing that the bootcamps fall under its jurisdiction and are subject to regulation. BPPE is charged with licensing and regulating postsecondary education in California, including academic as well as vocational training programs. It was created in 2010 by the California Private Postsecondary Education Act of 2009, a bill aimed at providing greater oversight of the more than 1,500 postsecondary schools operating in the state.

The intent here may be admirable. There are various scam “postsecondary education” offerings that don’t really provide anything of value and overpromise what they’re offering. But coding bootcamps are something else entirely. The various groups say they’re interested in complying with whatever regulations are necessary, but are also worried about the cost and the time it will take for this process to run its course. Bureaucracies aren’t known for their efficiency (or their inexpensiveness).

Filed Under: bppe, california, coding, coding bootcamps, licenses, regulations
Companies: app academy, hack reactor, hackbright academy, zipfian academy