Meetings, Recordings

Initial Meeting Recording

The audio recording from the initial meeting on the 24th of August 2018 has been uploaded to: https://youtu.be/K-m5lhUjbnU

4 Comments

  • skreutzer

    This is so incredibly amazing: to have it, to have it on YouTube, and to have it public! One advantage is that we get the automatically created transcript from Google/YouTube (I guess I can’t, but you as the uploader can download it in its entirety, even in quite usable formats, so that’s something we could work with/on). I would very much like to do the TimeBrowser thing in parallel (as those recordings are material and proceedings of the FTI and part of the group’s Journal and research “results”), but with the first, basic technical aspects of recording, publishing and transcribing already solved, the social/legal problems also need to be addressed: can the implicitly assumed agreement to publish the recording be established with all of the participants (usually you inform them at the beginning or end of a recording, and everybody who joins later, that they should express consent or object in the call itself)?

    You put it under the standard YouTube license, which is a restrictive license; can we get explicit permission from all participants to put the recording under the Creative Commons BY-SA 4.0 (maybe 3.0 as well for compatibility with the large body of 3.0 material like Wikipedia, maybe with a “+ any later version” clause to get a path to upgrading, although there’s not too much trust that Creative Commons will do “the good thing” with future licenses, as they haven’t explicitly committed to that and also house/advocate restrictive licenses)? YouTube, by the way, only supports CC BY (3.0?). Those are questions regarding the human-side policies, and while people tend to play nicely (granting each other blanket permissions to do things with the commons of the community) when it comes to collaboratively building software in large groups or writing an online encyclopedia, maybe they’re more hesitant when it comes to their communication, writing and the handbook, because they feel less pain and fewer restrictions as long as all we’re doing is passively reading/consuming it, not doing any actual knowledge work and augmentation beyond using the words to manipulate our brains/thinking.
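
    Just as a side note on what “working with” the transcript could look like technically: the auto-generated captions can also be fetched programmatically. Here is a rough sketch, assuming the third-party youtube-transcript-api Python package is installed (its exact API may differ between versions, so treat the call names as an assumption rather than a given):

        # Rough sketch: fetch the auto-generated YouTube captions for the meeting
        # recording. Assumes the third-party "youtube-transcript-api" package
        # (pip install youtube-transcript-api); its API may vary between versions.
        from youtube_transcript_api import YouTubeTranscriptApi

        VIDEO_ID = "K-m5lhUjbnU"  # from https://youtu.be/K-m5lhUjbnU

        # Returns a list of caption segments with text, start time and duration.
        segments = YouTubeTranscriptApi.get_transcript(VIDEO_ID)

        # Print a simple time-coded plain-text transcript, e.g. as raw material
        # for annotation or a TimeBrowser-style view.
        for segment in segments:
            minutes, seconds = divmod(int(segment["start"]), 60)
            print(f"{minutes:02d}:{seconds:02d} {segment['text']}")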

    There’s not much incentive to do the TimeBrowser thing with this recording if it can be taken away from under me, or if I’m legally prevented from doing reasonable things (like re-publication, annotation, etc.), so it would be of little to no use other than as example material for developing the capability, for which I could use any other recording on YouTube just as well (including my own uploads, which need to be worked on in that manner; they just don’t contain dialogue, that’s all).

    Copyright (C) 2018 Stephan Kreutzer. This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

  • skreutzer

    5:20 Having participated in the Doug@50 group (but not the earlier MOAD group [1, 2]), I don’t understand why Houria Iderkou should be frustrated. I also don’t think that the Doug@50 group got hijacked by the knowledge graph community; my impression is that it was a deliberate group decision based on the existing components the participants already had, so if there is to be a potentially amazing, groundbreaking result for the demo day on 2018-12-09, the 50th anniversary of Doug Engelbart’s great demo, that was and still is the most reasonable thing to do. It’s designed to be a result in Doug’s spirit, less concerned with recreating the past or doing long, basic research as the Augmentation Research Center did, and more focused on what’s needed for solving urgent, complex world problems. I personally am suspicious about machines understanding knowledge, be it for reasoning or learning or federating; I don’t care much about the exact identity of topics/concepts, but more about the semantic categories/types at first, in order to interpret and apply ViewSpecs on top of them, as the first step to improve from there. I’m uncomfortable with servers doing the curation, and I lament the lack of tools for knowledge workers to work on the material manually, as it’s the subject of social, human discourse. I want/need capabilities to work with basic, plain text (at first). The richness in that space from earlier days has been lost, as it seems to me. Anyhow, it’s not that I already have a lot of knowledge-supporting tools and there was no way to connect my stuff with the web model or to bootstrap something for linked data platforms. I also discovered that Frode probably doesn’t share this perspective; the document world might just as well be a separate one.

    Copyright (C) 2018 Stephan Kreutzer. This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

  • skreutzer

    4:48 Hmm, I wonder if we can promise yet that it will work. We have examples and indications from earlier systems that it might, but there is no guarantee that we can pull it off technologically or socio-politically. If it were less risky, the web wouldn’t be the only paradigm around. Maybe we can declare victory if we can get something small working for ourselves, to solve some of our own problems, and then go from there, or also see it as a success to find out and learn precisely why it can’t work? I don’t want to constrict the ambition without need, but to be realistic and prevent disappointment: it’s not that I managed to foresee the future and know exactly what will happen. We’re just starting and have many and potentially different ideas, so open-ended research/experimentation is needed to figure things out. If it is about the 50th anniversary of Doug Engelbart’s great demo, what can we do in only three months that in any way resembles an aspect of Doug’s vision or approach? I’m a little suspicious; from experience, things turn out to be harder than they look at first. On the other hand, we can’t allow worries to damage the confidence we need to pretend to have, as some kind of mental trick, just to be able to do anything in the field at all, hopefully leading to progress rather than failure.

    Copyright (C) 2018 Stephan Kreutzer. This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.

  • skreutzer

    7:55 I failed miserably at describing what I mean by “complex” and assumed that everybody in the hypertext field has some kind of glimpse of what it is, but to make sure that listeners can get an idea of what category of problems I’m referring to, I tried to write a rough introduction to the notion. It’s a major motive of Engelbart’s and Nelson’s work. Why do I feel confronted by a problem of complexity when giving a short introduction or elevator pitch about the things I’m doing or looking at? I’ve developed a bunch of small tools that do different tasks but also interact with each other in various ways, I’ve learned about many different aspects of text and publishing, I know what computers and networks are and can do, I’m aware of the mess we’ve created, and I try to look at all sorts of different things going on in past, present, future and in parallel. So when asked what I did, do or want to do, I have no clue where to start and where to end, or how to filter what might be relevant and what might not, in order to fit a description into a short time span. Whatever I say, the focus can shift tomorrow if there’s a good reason/opportunity, as I try to keep my stuff flexible enough anyway.

    It’s probably best to state the “values” I care most about: libre licensing, semantic net, open standards, avoiding dependencies, curation, offline usage, posterity, augmented reality for everybody. In effect, I care most about the universal library and am puzzled why we’ve allowed ourselves to not have it already. Going into more detail easily leads to immense complexity, as that is the nature of networked, distributed structures with many aspects to every single node. So I already have several useful pieces to approach the gargantuan task, but they haven’t been made coherent yet into a capability infrastructure. Instead of describing what I did so far, it’s more interesting to discuss what we might build, and of course previous results might be integrated, but as the problems with text aren’t solved yet to our satisfaction, there’s obviously still something missing. What that might be and how it might work has plentiful complexity of its own; one may frame it as being for text systems what POSIX is for operating systems, but we can easily suspect that there are many social, legal and technical implications that need to be worked on in order to make it happen.

    Talking about it (this topic, or any other topic with a sufficient number of interrelated aspects) under a linear time restriction, while still giving an accurate overview that’s not totally misleading, is a problem of complexity I don’t have a tool to deal with. My failure to present all of text, writing, publishing, hypertext, my interpretation of it and the views expressed by others in a few sentences is a good demonstration of where I see this problem.

    Copyright (C) 2018 Stephan Kreutzer. This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.
