ietf

Re: bettering open source involvement

2016-08-02 09:47:57
The licensing discussion is really a distraction.   The GPL specifically
talks about fair use, and reading GPL code is not going to make you subject
to a lawsuit unless you copy it wholesale and claim ownership of the copy.

That you have seen a piece of GPL'd code that implements an IETF standard
does not mean you can never work on non-GPL software implementing that
standard again, any more than it would be the case that you'd be violating
a textbook author's copyright if you wrote some code based on an
understanding that you'd reached by reading the textbook.   That would make
going to college kind of a waste of time!

The utility of GPL code if you need non-GPL code is obviously an issue;
from that perspective, if somebody in the hackathon is working on GPL code,
and you want non-GPL code, then it would make sense for you to hack on a
different code base.   But this brain taint idea is nonsense, and there's a
lot of case law to back that up.

On Tue, Aug 2, 2016 at 10:37 AM, Charles Eckel (eckelcu)
<eckelcu@cisco.com>
wrote:

On 8/2/16, 2:09 AM, "ietf on behalf of Dave Taht"
<ietf-bounces@ietf.org
on behalf of dave.taht@gmail.com> wrote:


On Tue, Aug 2, 2016 at 1:12 AM, Eggert, Lars <lars@netapp.com>
wrote:
Hi,

On 2016-08-02, at 9:10, Dave Taht <dave.taht@gmail.com> wrote:
On Mon, Aug 1, 2016 at 4:36 PM, Eggert, Lars <lars@netapp.com> wrote:
On 2016-08-01, at 15:44, Livingood, Jason <Jason_Livingood@comcast.com> wrote:
What if, in some future state, a given working group had a code
repository and the working group was chartered not just with developing the
standards but maintaining implementations of the code?

as an addition to developing specs, that might be useful, if the spec
remains the canonical standards output.

"Go read the code" is not a useful answer if the code comes under a
license (such as GPL) that taints the developer. (This is a major reason
what we are doing IETF specs for DCTCP and CUBIC - so that they can be
implemented without needing to read Linux kernel code.)

Only 10 (?) years after full support for cubic entered the linux
kernel, and 3 after dctcp.

The Linux community had chosen to actively ignore the IETF for about
ten years. This only changed relatively recently.

As one of the people who have "led" that invasion, I did it in part
because I felt that over the past 16 years many standards processes
had become equivalent to GIGO, and I still believed in running code
and rough consensus, and had nowhere else to go. Finding more ways for
all to work together to bring spaceship earth in for a safe landing
has always been a goal of mine.

And, FWIW, Hagen & friends' DCTCP implementation for Linux is based on
the initial versions of our DCTCP I-D, and arguably wouldn't have happened
without it.

CUBIC has of course existed in independent implementations before, but
it is unclear if the BSD licensed ones were actually only done based on
Injong's paper.

And cubic as it exists today in linux has continually evolved. I am
very grateful to google in particular for working within the ietf
standards process to make sure that many improvements to TCP in
general have been made public.

If you define the efforts of this standards body as one to produce BSD
licensed code (which is basically the case), it will continue to lag
behind the bleeding edge and continue to become more and more
irrelevant.

I guess we're getting on our soap boxes at this point? :-)

Believe it or not I am deeply ambivalent about all the "open source"
license schemes. For code critical to public safety and privacy in
particular I have called for "public source", and standards for well
maintained code,  available for inspection by as many as need to look,
under any license, including "kill yourself after reading".

We'd discussed this point in a public videoconference (against the
backdrop of the vw emissions scandal and the fight with the fcc over
the wifi router lockdown) at length, here:

https://plus.google.com/107942175615993706558/posts/9may7aHjnqF

We can always keep doing stuff like that, engaging more sides in the
debates, in the hope that more light than heat emerges. Increasingly
governments and regulators want a say.


But I don't define "the efforts of this standard body" in this way. I
remain convinced that textual specs are required.

Given the complexity collapse and explosion in text size in
translating code to spec, and the slow progress by which an rfc can be
evolved, updated, or discarded, I am less and less convinced.

Code is a nice addition, but really  only useful if it can be rather
freely used - which GPL code can't.

The LGPL is also out? (I am not being sarcastic, I would merely like
the ietf to list their approved licensing schemes)

Given the breadth of work done in the IETF, there is not going to be one
license that is appropriate in all cases. Code to add support for a new RFC
in the Linux Kernel would typically need to be GPL. Code to create a
completely new implementation of some experimental draft might be best
licensed with a BSD license. Code to add support for STIR to an existing
open source SIP implementation would likely need to adopt the license of
that open source project. Open source code could implement a protocol
stack, mystack, or it could add support for a given protocol to something,
perhaps using mystack. Perhaps the IETF can create some guidelines or have
some folks who help with license education and selection, but
ultimately the choice of license will depend on the code contributors and
what they are contributing, or contributing to. My feeling is that
the IETF has the most impact when code is added to existing open source
projects to support evolving IETF standards. Creating open source code in a
vacuum to help people understand a draft and jumpstart
implementations of that draft is of course great too.

Cheers,
Charles


The effort to develop code that fits certain vendors' IP regime is
significant. I would support changes to the wg formation process that
were less vague than polling the room for "is there enough interest in
the room to do this". I would also like, at the least, all experimental
code leading up to a standard's acceptance to be published. I have
lost endless months to dissecting papers and bad experiments - or
experiments where I merely wanted to change a few control variables
and re-run with my own data and tools.

For the record, flent is a GPLv3 *wrapper* around multiple other
tools. It's GPLv3'd, in part, because we'd hoped to make sure that
experiments published with it did not game the results in any way.
Using it does not "taint" anyone. Modifying the tests does.



It's not just the deployed code in kernels that is a problem; it is
also that the best of the tools available to prototype new network
code are GPL'd. NS3, for example, is GPL. The routing protocols
incorporated in bird and quagga are GPL. BIND is BSD, but Nominum's is
proprietary, and dnsmasq is GPL'd.

There is increasingly no place to design, develop, and test new stuff
without starting from a gpl base.

I agree that this is a problem. But we can't all start to use GPL for
everything.

Just as Apple found it necessary to invest in a BSD-licensed compiler,
orgs that wish to have BSD-licensed "open source" code that can
compete with GPL'd versions need to invest in tools, tests, and
developers.


Worse, what happens here at the ietf without use of these tools is that
we end up with experiments and results from non-open-source code being
presented, without any means for an independent experimenter to
verify, reproduce, or extend them.

That's a stretch. The alternative to GPL is not closed source. There
are other, friendlier OSS licenses around.

And insufficient developers.

I think it would do a lot of semantic good if the ietf would stop
referring to "open source"[1] and always refer directly to the
licenses that are allowed for the code it works on. There are
certainly new areas of interest, like NFV, that are proceeding
with more vendor-friendly code licensing schemes, although I am
dubious about the performance benefits of moving all this stuff into
userspace, particularly when a seeming primary goal is to avoid
making free software, rather than engineering a good, clean, correct
solution.

It has been my hope that the Alice decision re patents (with 80% of
disputed software patents being invalidated), the rise of
organizations offering patent pool protections like the Open
Invention Network, and - I think (IANAL) - the finding in Google vs
Oracle that APIs cannot be copyrighted, together mean that a developer
can no longer be polluted merely by looking at GPL'd code once in a
while. Because we do.

As much as I want to agree, if you work for a commercial entity, the
risk is just too great (cf. the GPL clause regarding implicit licenses to
patents).

What can be done to reduce that risk? I already pointed to OIN (both
google and cisco are part of it - there is now quite a large number of
members, actually):

http://www.openinventionnetwork.com/community-of-licensees/


The actual implementations of anything for anything else will tend to
vary so much due to API differences, and the expressible logic in the
algorithms themselves is generally simple enough, that the exposure to
further risk is minimized - particularly when the authors of the code
have presented it for standardization, under any license.

Sure. But the risk is incorporating code that may be GPL-tainted into
non-GPL'ed code bases. In other words, it's not the code itself that is a
risk; it is a risk for the codebase it is used in.

You gotta rewrite it, so what? Copy/paste is a problem for all licenses.

There are powerful advantages to the GPL (and LGPL[2]) over
"standardization". Notably there is an implicit patent grant, and
ongoing maintenance is enforced by an equal spirit of co-operation.
It's a better starting point than sitting with a sword of Damocles
over your head, wondering if someone will patent something out from
under you.

That's certainly one viewpoint.

Yep.

Lars

I wish we could just get on with making the internet a better place.

Sorry, but I really don't understand how this discussion is not trying
to help with just that?

Lars



--
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org