Anonymous 2016/0/19/19:54:22 No.499795

File: 1453226062349.png

"IPFS is a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files. In some ways, IPFS is similar to the Web, but IPFS could be seen as a single BitTorrent swarm, exchanging objects within one Git repository"
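The quoted pitch boils down to content addressing: a file's identifier is derived from its bytes, so identical content always has the same address no matter who adds it. A minimal sketch of the idea (flat SHA-256 hex digest as a stand-in; real IPFS uses base58-encoded multihashes over a chunked Merkle DAG):

```python
# Sketch of content addressing, the core idea behind IPFS: identical
# bytes always produce the same address, no matter who adds them.
# Flat SHA-256 stand-in; real IPFS hashes are base58 multihashes
# computed over a chunked Merkle DAG, not a single flat hash.
import hashlib

def content_address(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

assert content_address(b"hello ipfs") == content_address(b"hello ipfs")
assert content_address(b"hello ipfs") != content_address(b"hello ipfs!")
```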

Share your cool ipfs hashes

Anonymous 2016/0/19/19:58:12 No.499799




Anonymous 2016/0/19/20:11:24 No.499808


What do you recommend? Fucking assembly?

Anonymous 2016/0/19/20:41:49 No.499826


>uses .io domain

of course it's going to be shit

Anonymous 2016/0/19/20:58:58 No.499834


Considering what issues the project addresses, it makes perfect sense for it to use a .io domain.

You know, because I/O. And the project is about the structure of the Internet at the moment and how everyone has to connect to a server to get content instead of distributing the content, etc.

Anonymous 2016/0/19/21:18:49 No.499853


.io is a fucking meme domain used by scriptkiddies at this point

Anonymous 2016/0/19/21:30:7 No.499857

File: 1453231807650.jpg


>judging a piece of software by what domain it is using

Anonymous 2016/0/19/22:5:51 No.499878


IPFS is a filesystem you retard, you can write it in anything you want. There are other implementations planned like js-ipfs or py-ipfs.

Anonymous 2016/0/19/22:9:50 No.499879


>Considering what issues the project addresses, it makes perfect sense for it to use a British Indian Ocean Territory domain

Anonymous 2016/0/19/22:28:30 No.499888


>js-ipfs or py-ipfs

Here's a question. Why didn't it start there?

Answer: project maintainer(s) is/are insane.

Anonymous 2016/0/19/22:55:24 No.499908

Last I heard it's been updated to version 0.4.0, what has been fixed/added since last version and what can we expect for the next version?

Anonymous 2016/0/19/23:46:36 No.499955


Are you saying that Python and Javascript would be a saner choice for this?

Anonymous 2016/0/19/23:50:9 No.499958


Are you saying Go is a saner choice than those?

plz, anon

Anonymous 2016/0/20/0:3:7 No.499965


I think Go is nice. What is it you don't like about it?

The thing is, scripting languages are not that good for anything intensive. They have their uses. Javascript runs in web browsers, and Python is easily portable. But for serious use, you want these things to run fast.

Anonymous 2016/0/20/1:5:47 No.500002

waiting for tor and i2p support. dev says he'll focus on it in February.



it's just the reference implementation of the protocol. and there are some projects for other languages.


>using python or javascript for a program that's supposed to become a core infrastructure piece

Anonymous 2016/0/20/2:36:30 No.500083


yes, you fucktard

Anonymous 2016/0/20/3:18:48 No.500104

>still no i2p support

Lack of anonymity is the only thing keeping ipfs from becoming popular.

Anonymous 2016/0/20/9:54:51 No.500292

OP you should really include more links.

This script changes all ipfs hashes into links to the ipfs gateway (on page loads)

These redirect all gateway links to your local daemon, it works well with the previous script.
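The rewriting such a script does can be sketched like this. `gateway.example` is a placeholder, not the real gateway domain; pointing it at `http://localhost:8080` gives the local-daemon behaviour the redirect addons provide:

```python
import re

# v0 IPFS hashes are base58 SHA-256 multihashes: "Qm" plus 44 base58
# characters (base58 excludes 0, O, I, and l).
HASH_RE = re.compile(r'\b(Qm[1-9A-HJ-NP-Za-km-z]{44})\b')

def linkify(text: str, gateway: str = "https://gateway.example") -> str:
    # Replace every bare hash in the text with a gateway URL.
    return HASH_RE.sub(lambda m: f"{gateway}/ipfs/{m.group(1)}", text)

ep1 = "QmbTWVLtUhdLJws4reyJ7CnkVwwivR4FTM3Jnj9YebNhBu"
assert linkify(f"ep1: {ep1}") == f"ep1: https://gateway.example/ipfs/{ep1}"
```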

Here's some general propaganda (I genuinely think this is a good article that conveys some good things about IPFS)

Can Go complaints be directed to the Go thread >>493136

We've had some nice IPFS threads in the past months with lots of tech talk and file sharing going on.


A lot of performance stuff: it uses less memory, does a lot of operations faster, and apparently fixes some network issues; some people were saying they couldn't get some hashes in the previous thread but could with .4. Outside of that I think there are some functionality things like more arguments and some more config options.


It's being worked on iirc, so it's at least planned. I believe they want to interoperate with a lot of different things so that people can use what they want for what they need, like i2p for privacy; if someone doesn't trust i2p they could use whatever they wanted, like tor or something else.

Anonymous 2016/0/21/5:34:58 No.500889

So, IPFS booru, how could that work? The simplest way I can think of would be to save hashes and tags to a central website, and just retrieve the images with ipfs, but could this be completely decentralized?

Anonymous 2016/0/21/6:0:47 No.500909


What you said seems to be the easiest right now, handling dynamic content seems to be something they're working on with ipns and some other thing (ipld?) but it's not all finalized yet.

I've seen people use typical http for all the dynamic stuff and ipfs for all the more permanent things like the html, js, images, etc. There are also examples of people using Ethereum with IPFS to handle dynamic content, but I don't know much about that. Someone had a site hosted via ipns that displayed a number and a form entry box; you could submit a new number and the site would change (the ipns record would update), and the new number would be displayed for everyone who visited the ipns hash. It used Ethereum to handle the number somehow. If someone has the hash to that, please post it.

This guy wants to make some kind of imageboard-like thing too so maybe it's worth looking into.

Anonymous 2016/0/21/6:4:45 No.500911

File: 1453349087444.png



I forgot to mention >>>/hydrus/ is planning on integrating IPFS. The hydrus network is essentially a local booru that syncs up with repositories; the repos can contain tags, files, and more. This seems like the best option we'll get in the short term, and maybe even the best long term. You'll run a client with your media, sync remote tags over ipfs, and distribute files via ipfs as well.

Anonymous 2016/0/21/7:52:46 No.500936


Nice. Has it always supported automatic tagging based on hash?

Anonymous 2016/0/21/7:59:50 No.500941


That's the basis of the project, it takes in media, hashes it, and you can assign tags to that hash and have relations with tags (parent and sibling).

The tagging is not automatic, it's just shared if you use the public tag repository: if you and I have the same file and one of us tags it publicly (local and remote tags are kept separate), then that tag will show up for both of us eventually (after you sync the repo).

You can automatically assign tags based on things like filenames and other factors, but it's not magic. There is, however, a planned feature that would do actually automatic tagging.
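A toy model of the tag-sharing scheme described above, with made-up hashes and tags just to show the local/public split: tags attach to a file's hash, and a "sync" merges the public repo's tags for hashes you actually have.

```python
# Toy model of hash-based tag sharing: local and public tag mappings
# are kept separate, and syncing merges public tags only for files
# present locally. Hashes and tags here are invented for illustration.
def sync(local_tags: dict, public_repo: dict, my_hashes: set) -> dict:
    """Merge public tags into the local view for files we actually have."""
    merged = {h: set(ts) for h, ts in local_tags.items()}
    for h, tags in public_repo.items():
        if h in my_hashes:
            merged.setdefault(h, set()).update(tags)
    return merged

mine = {"abc123"}
local = {"abc123": {"touhou"}}
public = {"abc123": {"fumo", "touhou"}, "zzz999": {"other"}}
assert sync(local, public, mine) == {"abc123": {"touhou", "fumo"}}
```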


Anonymous 2016/0/22/20:20:15 No.502055


You can already use it over Tor and CJDNS. The i2p thing is just creating a pure TCP mode.

Anonymous 2016/0/22/20:31:11 No.502060


There's a site called Hiddenbooru on i2p

Anonymous 2016/0/22/23:56:23 No.502189


>using the best language after maybe C

seems like an advantage to me

Anonymous 2016/0/23/0:5:23 No.502201

still no Tor or I2P support and no one uses it. IPFS will forever be a meme.

Anonymous 2016/0/23/0:6:23 No.502202


Go > C > *

Anonymous 2016/0/23/5:20:32 No.502373

File: 1453519233731.png

I just got my personal file host working which adds all files to ipfs while also giving me a http link to share with normies.

Anonymous 2016/0/24/6:30:31 No.503167

Has anyone done a proper comparison of 0.3 vs 0.4?

Anonymous 2016/0/24/8:41:13 No.503231

>new piece of technology still in alpha and being actively developed as a proof of concept


If you hate Go so much, go organize an IPFS implementation in a real language. Until then stop bikeshedding and go start another browser thread or something.

Anonymous 2016/0/24/21:52:57 No.503599

If I leave my computer off for, say, two weeks and I turn it back on after a node runs a cleanup, do I have to do anything to re-enable the content I've added? Or does the daemon auto-pin the content to the node again?

Anonymous 2016/0/25/10:25:59 No.504559


When you add content yourself you're also pinning it, so it is still pinned whenever you start your node again.

Anonymous 2016/0/25/12:11:6 No.504588


Go is one of the best languages. Ah those nice, statically compiled binaries. And it has a great toolset. Finally the compilation process is invoked with a simple command and not 30 automake scripts.

Anonymous 2016/0/26/13:3:22 No.505540


I think the more important aspect will be the JS impl.

Once that's out, people will start using it as a matter of course.

Anonymous 2016/0/29/0:8:27 No.508083

>not written in ocaml

It's shit fam.

Anonymous 2016/0/29/0:9:20 No.508084

Anonymous 2016/0/29/2:47:1 No.508198

I just ran into this, so theatrically a file can never be deleted in the file system ... theatrically.

Anonymous 2016/0/29/3:3:31 No.508209



I laughed.

Anonymous 2016/0/29/3:21:29 No.508220


fuck i should stop using my phone for this shit

Anonymous 2016/0/29/13:55:49 No.508523


not really; keep reading

Anonymous 2016/0/30/12:37:21 No.509350


wot? i2p doesn't mandate TCP, tor does.

Anonymous 2016/0/31/21:23:32 No.510525

Hey guys, I am thinking of implementing this as a fuse mount and read it as a p2p trivial deb repo. Anyone tried this? Any problems you can foresee? Are there any more mature projects as an alternative to this?

Anonymous 2016/0/31/21:52:21 No.510557



jej. thanks for the warning. and it seems to be written by a bunch of nobodies. I still haven't got an answer to why I should study this instead of Freenet.

Anonymous 2016/0/31/22:43:54 No.510614


what is the link?

Anonymous 2016/0/31/23:22:28 No.510669


Coming this summer

Anon Shares A File

An ancient evil is about to be uploaded so you better hang on to your keyboards if you want to keep your daemons running!

Anonymous 2016/1/1/0:27:50 No.510706


>Anyone tried this?

Tried once adding the gentoo portage tree, but it would have taken too long.

>Any problems you can foresee?

Adding lots of files will probably be slow, you should try with 0.4-dev which should improve performance a lot.

One of the nice things that can be done is calling apt-get clean regularly or putting /var/cache/apt/archives/ on a tmpfs, since files are already cached by IPFS

>Are there any more mature projects as an alternative to this?

No, unless you count apt-torrent, debtorrent, apt-p2p, which were even more unstable than IPFS and had plenty of drawbacks.


>java enterprise abstractorfactoryfactories, dating back to 2000

>golang split into dozens of repos, most documentation incomplete

Pick your poison

Anonymous 2016/1/1/2:56:41 No.510805


Tight stuff. Thank you anon.

Anonymous 2016/1/1/4:24:43 No.510845


because freenet is for websites and this is something completely different



the feds will soon control this

Anonymous 2016/1/1/20:44:58 No.511361



Arch can be easily configured to pull packages from IPFS, I bet it wouldn't be hard to do the same for Debian repos.

Anonymous 2016/1/2/5:8:26 No.511684

Could a chan be made to work with IPFS?

My thought was to have a normal SQL database, etc., with post content (or alternatively, have each post stored as an IPFS address that is loaded via JS or some shit if that's expensive to store too), and then the users themselves could contribute to the hosting of image/video content.

As IPFS caches the files it loads, the users of any particular board would then be contributors to the hosting of that board's content.

Could something like this work?

InfinityIPFS. We could get Josh to build it.

Anonymous 2016/1/2/5:12:23 No.511687

fucking retards dont know how to use tor :)

Anonymous 2016/1/2/5:16:46 No.511691


tor isn't made for sharing large files you fag

Anonymous 2016/1/2/5:17:58 No.511692


Fucking faggot.

Anonymous 2016/1/2/6:4:39 No.511725


See >>500909

There's a handful of other ideas floating around of how to do it, search for discussion around how to use IPFS with/as a database or other existing systems.

Hotwheels mentioned before that he was lurking one of the IPFS threads and may be interested in looking into it, granted this is still alpha so I highly doubt he would (and I don't recommend he does) use it now.

There are big things that need to be done before this rolls out for something as big and dynamic as a popular imageboard. It's mostly for long-term static content right now, but the more dynamic things are in progress; IPNS with pubsub as well as clientside resource limits would have to be finished before this should be deployed at that scale. Another thing would be figuring out how to accommodate non-ipfs users: 8ch could host its own gateway and have IPFS users just redirect to localhost like the addons do for the official gateway (or use a hosts file). A more practical solution would probably be to use the javascript implementation when that's done, but I know nothing about that; I'm making a baseless assumption that it would be resource-heavy and slow to run some kind of instance for each tab. Maybe it doesn't have to be, I have no idea how browsers/js work really.

Anonymous 2016/1/3/1:55:16 No.512278


here's all 3 episodes of Boku no Pico

Boku No Pico Episode 1 - ipfs/QmbTWVLtUhdLJws4reyJ7CnkVwwivR4FTM3Jnj9YebNhBu

Boku No Pico Episode 2 - ipfs/QmeTUFENeJJjcN617m3Twd5kCdcTnoyZKHANkZ7NnYe2de

Boku No Pico Episode 3 - ipfs/QmXy6yZAwwtmQc44t7sy7ivndqdHMCBUHu3vbrsih5WzjG

Anonymous 2016/1/3/1:58:40 No.512282


single directory link instead: ipfs/QmeCq2H2w2tJ9Yr8AmLi7bjkopdNpaB5LaG9fZRy52Q4Ts

Anonymous 2016/1/3/3:3:26 No.512321



my shitty upload is triple of download from the very start. seeeeeeeeed plz!!!!11

also yes, please put them in folders and name them properly because mpv couldn't figure out what the fuck those are. that's


>in current year

after all.

Anonymous 2016/1/3/3:45:24 No.512346


>tor isn't made for sharing large files you fag

wrong. making shitloads of connections to different addresses is what causes problems on tor.

you can download files as big as you want using things like http or ftp without any extra load.


you're still cancer tho and your priority will automatically lower if you put too much load.


i tried it here it worked:




pulling packages is trivial: just add an ipfs address as a repo. the issue is how to best publish and update the repo. there are several ways.

Anonymous 2016/1/3/3:54:24 No.512351




>implementing this as a fuse mount

it already can do that.

Anonymous 2016/1/3/4:26:5 No.512383

>the issue is how to best publish and update the repo

IPNS seems like it would be the best way but they're still working on that. IPNS works now but there are limitations being worked out.

Anonymous 2016/1/3/6:19:24 No.512459


I just have really shitty internet

Anonymous 2016/1/3/7:58:46 No.512506


kill yourself. Literally wat. No typing in Javascript. Poor performance for infrastructure needing performance.

Anonymous 2016/1/3/10:15:34 No.512560


what is type inference you unadaptable cunt

Anonymous 2016/1/4/13:24:38 No.513401


>pulling packages is trivial: just add an ipfs address as a repo. the issue is how to best publish and update the repo. there are several ways

Deb guy here, this is why I mentioned the fuse mount. If the packages appear as a local directory which anyone can add to, then a package db (I think it's a db) can be made locally to reflect available files.

So the /apt/sources.list would read

deb file:/usr/local/ipfsfuse/debs ./

Then the repo update script can be edited to include

dpkg-scanpackages /usr/local/ipfsfuse /dev/null

So the packages are made available to the package manager just by updating the repos.

In addition we can give it a very low pin priority to ensure it doesn't pull system updates from there.

Anonymous 2016/1/5/1:13:25 No.513805

File: 1454627606464.png

reposting my touhou link:



downloaded torrent & manually added, now should seed properly

Anonymous 2016/1/5/2:24:21 No.513844


freenet hosts arbitrary files. All I know about ipfs is that it hosts files


>Pick your poison

Not that I care what PL it's written in, but why would you want Go *and* Java instead of just Java...

Anonymous 2016/1/5/9:31:7 No.514087

File: 1454657467525.png

>add a large file

>run out of space while hashing

Just kill me now.

Anonymous 2016/1/5/11:13:9 No.514123

My files aren't showing up on the network. If my go-ipfs version is too old, does it just ignore my pins?

Anonymous 2016/1/5/11:33:31 No.514128


>Statically compiled

Please die.

Anonymous 2016/1/5/13:39:39 No.514179


I can't wait for pin in place, I get why you'd want to make a copy of it but there's no way I'm keeping 2 copies of everything I want to share. At least it's on the issue list

As soon as this is done I'm sharing my entire media drive. It will be like DC all over again but so much better.


What do you mean not showing up? The only issue I can think of is that 3.x peers can't communicate with 4.x peers, if both of your endpoints are on the same version they should work fine. The public gateway should currently handle either though.

Make sure your daemon is running I guess.

Anonymous 2016/1/5/14:9:1 No.514198

 ~ $ eix ipfs
No matches found
~ $ eix -R ipfs
No matches found

Sweet, I don't even have to throw it in the trash myself.

Anonymous 2016/1/5/14:26:44 No.514208


I really don't get why you'd install something just to uninstall it, if you think it's trash why even bother in the first place?

Anonymous 2016/1/6/5:37:45 No.514754

File: 1454729865516.jpg

Hello /tech/, /a/ here. How difficult would it be to build a tracker on IPFS, and would it be possible at all?

Anonymous 2016/1/6/6:16:23 No.514766


Doesn't seem that hard at the most basic level. You submit a hash to the tracker with comments about it, and the tracker displays whatever metadata it can find with it and indexes it for searches. Pretty much anyone can do that with a standard torrent tracker website template. As for actually tracking, I don't think you can keep track of how many people are seeding/leeching on IPFS. Could be wrong, though.

If it was really ballsy, it could have an option to download a local copy (below a certain size) and spit out some video screencaps if it detects a video. It would have to delete the file immediately afterwards but it's a way of verifying that the file is legitimate without having to trust the uploader. But that's just a pipedream for a widely-adopted IPFS future.

Anonymous 2016/1/6/6:19:29 No.514767


Check this

You don't need a peer tracker with IPFS, IPFS tracks all that itself. The only thing needed is an index of IPFS hashes; what most people call a torrent tracker is in fact an index that also runs a tracker on the same domain. With IPFS you only need the index part.

tl;dr you just need some kind of text file that says hash X = series Y in format Z

Ideally you'd have rich searching too. You could more than likely modify whatever existing torrent tracker site frontend you want to just point to ipfs hashes instead of torrent ones after stripping out all the ratio stuff, etc.

Long term I really hope people use IPNS with mount points. Imagine you want to watch Series X: some release group goes "here's this ipns (not ipfs) hash", you mount that hash to your media drive like ~/Anime/Series X/, and when the release group puts out an episode it is automatically pushed to that directory. With pubsub this should be possible.
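A minimal sketch of that "index, not tracker" idea: a plain text file mapping hashes to titles and formats, plus naive search. The pipe-separated line format is invented for illustration; a real site would use a database and richer metadata.

```python
# Minimal "index, not tracker" sketch: hash X = series Y in format Z.
# Line format (hash|title|format) is made up for this example; the
# hashes are ones posted earlier in the thread.
INDEX = """\
QmbTWVLtUhdLJws4reyJ7CnkVwwivR4FTM3Jnj9YebNhBu|Boku no Pico ep1|webm
QmeTUFENeJJjcN617m3Twd5kCdcTnoyZKHANkZ7NnYe2de|Boku no Pico ep2|webm
"""

def parse_index(text: str) -> list:
    entries = []
    for line in text.splitlines():
        h, title, fmt = line.split("|")
        entries.append({"hash": h, "title": title, "format": fmt})
    return entries

def search(entries: list, term: str) -> list:
    # Case-insensitive substring match on titles.
    return [e for e in entries if term.lower() in e["title"].lower()]

entries = parse_index(INDEX)
assert len(search(entries, "pico")) == 2
```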

Anonymous 2016/1/6/6:28:43 No.514774


Is their DMCA infrastructure a problem for the network, or just a problem when fetching from the web?

If it is a problem, would it be possible to have a fork of the network? Like a private ipfs network bootstrap? Sort of like a private bittorrent tracker?

Anonymous 2016/1/6/6:31:40 No.514777


nobody can guess a hash of your file. so you would just share privately. as for security I would just have something similar to a seedbox to prevent people from getting your ip and contacting an ISP

Anonymous 2016/1/6/6:31:48 No.514778


But unlike a private tracker there's nothing stopping nodes from outside connecting to your swarm and then fucking it all up.

Anonymous 2016/1/6/6:40:44 No.514781


To be honest there's not much that can be done to take anything on IPFS down, the only current DMCA solution is to have an opt in blacklist for gateways, gateways are only useful for people not running ipfs themselves. If you're running ipfs yourself and you want a hash that is reachable then you're going to be able to get the content, it can't really be prevented. Kind of like torrents and their hashes, you'd have to take down all the peers hosting it.

>If it is a problem, would it be possible to have a fork of the network? Like a private ipfs network bootstrap? Sort of like a private bittorrent tracker?

Yes, you can do that now: you can choose your own bootstrap nodes and as such run private IPFS networks. But there's no real reason to do that that I can think of, outside of bandwidth considerations, and IPFS will support resource limits natively eventually so this should be a non-issue later.

As for private sharing like >>514777 said there's no harm in being exposed to the network with private content since nobody can retrieve it unless they know about it anyway.


I forget, but I think there's some way of preventing this; I remember PerfectDark having a similar "issue", but they treated it like a benefit. I agree with that too: everyone should be connected to everyone else. It keeps the network resilient AND fast when everyone shares with everyone as fast as possible, a race-to-idle kind of thing.

Anonymous 2016/1/6/7:4:4 No.514789

File: 1454735044612.png

>add .webm video to IPFS

>load it through gateway to test its availability

>play it through Icecat and it doesn't work at all (file is corrupt)

>play it through Firefox and the subtitles are gone

>takes about five years to launch it in mpv because it needs to buffer the whole fucking video before playing for some reason, even though the video is minutes long

Is it my encoding? Is there any way to stream chinese cartoons to other people without hardcoding the subs?

Anonymous 2016/1/6/10:38:17 No.514843

File: 1454747898064.webm


Did you test the file in your browser locally? If the subtitles don't show there then you broke it yourself on encoding. Files work fine for me even through the gateway, with that said though the gateway is a backup solution, ipfs obviously works best client to client.

Anonymous 2016/1/6/23:59:59 No.515087


They plan on adding support for private blocks


Why not just use the http gateway URL in sources.list?

Also running apt-get clean regularly would free some space, since packages are already cached by IPFS.

Creating a mirror from downloaded packeges would be pretty cool.

Anonymous 2016/1/7/0:5:28 No.515091


I've since decided to just bake the subtitles in. My purpose is for normalfags to be able to watch it with their shitty browsers, so the compatibility's gotta be high.

Since that post I've been able to serve it up better by shrinking the file size. Still get the corrupt error in Icecat but I figure anyone using Icecat is also competent enough to launch it through some other means like mpv.

Anonymous 2016/1/7/9:26:56 No.515352


Of course, what else should we use to write a filesystem? :^)

Anonymous 2016/1/7/9:29:20 No.515355


Normalfags don't care about whether their animu is in high quality or not, they just care about It Just Works, Click Play and Enjoy and quality not being total and complete shit. They basically just want a clandestine Jewtube/Netflix.

Anonymous 2016/1/7/10:30:35 No.515388


One could try extracting the srt/vtt files and embedding them within <track> tags, with some javascript/redirect to switch from the default.

Anonymous 2016/1/7/11:46:12 No.515415


On ipfs add, try using the -t flag and see if it makes it any better; it uses the trickle-dag format, which should be better for media files.


and the older:

For more info on the format.

ipfs add -t *files or directory*

I'm not sure but I think it's less efficient (uses more data) but more resilient. I could be wrong though.

As for the subtitle track, that shouldn't be a problem with IPFS, that sounds like a browser or encoding issue since an HTML5 compliant browser should support subtitle tracks in video files and video tags. I haven't experimented with those too much in browser myself yet.

Anonymous 2016/1/7/11:48:31 No.515417

File: 1454838511858.png


>file system that seeks to connect all computing devices

>everyone is connected to everyone

The ultimate botnet?

Anonymous 2016/1/7/15:31:25 No.515476


i want to join it tbh.


>I've since decided to just bake the subtitles in.

absolutely haram. why would you pander to normies on a animu on an experimental tech? wtf?




you realize that normies use Chrome based browsers, right. They don't give a shit about Firecuck, which is good because FF sucks.

>takes about five years to launch it in mpv because it needs to buffer the whole fucking video before playing for some reason, even though the video is minutes long

did you put just 1 keyframe for muh filesize and expect it to seek? yeah sounds like shit encoding.


>Also running apt-get clean regularly would free some space, since packages are already cached by IPFS.

No, you don't want apt to cache by default at all in that case.

Anonymous 2016/1/7/18:11:8 No.515561


>statically compiled binaries

t-this is a joke...r-right?

Anonymous 2016/1/7/18:14:57 No.515564


you realize no one cares about normies here, right?

Anonymous 2016/1/7/18:22:49 No.515571


Wasn't this one of the Plan9 goals? That's the world I want to live in: global distributed file systems for public files and private clusters for private files. You can't beat this scale of redundancy and ease of replication, over a network even. They should add some form of verification/integrity checking, that would be great. Given that you can dump a list of all the content you have, you could do it now in a crude way that would repair any corruption: just download everything you already have to /dev/null, and it will re-fetch any part that's broken. There should still be some built-in official solution for this if they want to be a filesystem imo, since they rely on the underlying fs to do it for them now. Maybe it's a long-term goal, who knows.
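The crude repair idea above can be sketched as a local manifest check: keep a map of path to expected hash, re-hash each file, and report anything that no longer matches so it can be re-fetched. Whole-file SHA-256 is a stand-in here, not the actual IPFS block hashing.

```python
# Crude integrity check: re-hash local files against a stored manifest
# and report mismatches (corrupt or missing) for re-fetching.
# SHA-256 over whole files is a stand-in for real IPFS block hashes.
import hashlib
from pathlib import Path

def verify(manifest: dict) -> list:
    """Return paths whose current hash no longer matches the manifest."""
    corrupt = []
    for path, expected in manifest.items():
        p = Path(path)
        if not p.exists() or hashlib.sha256(p.read_bytes()).hexdigest() != expected:
            corrupt.append(path)
    return corrupt
```

In IPFS terms, each reported path would then be re-downloaded so the broken blocks get fetched again from peers.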


Anonymous 2016/1/7/18:49:20 No.515587


I want an ipfs-to-9p bridge that I can just run as a daemon and `mount -t 9p` from

Wake me in 2030 when it's done

Anonymous 2016/1/7/20:13:25 No.515628


I only know a little about 9p, I meant to look into it more. Since IPFS can be mounted via fuse and apparently even via Dokan(y) on Windows, what would bridging to 9p allow you to do that you can't accomplish already?

I wonder if you could just rewrite the existing mounting portions and use some 9p Go lib for what you want.

Anonymous 2016/1/7/21:54:43 No.515718


>what would bridging to 9p allow you to do that you can't accomplish already?

Run the bridge on one computer, now your whole LAN can access IPFS without installing anything and you don't even have to touch NFS's bullshit with a 10 mile pole.

Anonymous 2016/1/8/9:16:18 No.516151


It isn't, and there is no way to do dynamic linking or create shared libraries out of Go libraries. Neither can Nim. Rust can, but you have to jump through multiple hoops due to cargo not supporting it correctly.

Why all these languages aren't just GCC/LLVM frontends is something I still fail to comprehend.

Anonymous 2016/1/8/15:5:9 No.516279


I'm not positive, but I'm pretty sure you can do dynamic linking in Go via gccgo; the standard Go compiler eventually added a way to do this in 1.5 as well.

Anonymous 2016/1/9/1:40:19 No.516891


>tfw I uploaded QmVBEScm197eQiqgUpstf9baFAaEnhQCgzHKiXnkCoED2c

Here, add this to your list too: /ipfs/QmVYLYeFLvxEEV6qFP8SHH4kP8VJb4Py4vpRkgqH8Hjyfx

It doesn't want to play in-browser for some reason. Probably because it's 10-bit :^)


I've never got WebM subs to work in any browser, even WebVTT subs. I don't even think that Chromium's built-in player supports subs.

Anonymous 2016/1/9/5:39:32 No.517095


>I've never got WebM subs to work in any browser, even WebVTT subs

They work fine if you have separate files and a HTML video>track wrapper, dunno about embedding them though.

Anonymous 2016/1/9/5:45:10 No.517099


Ahhhhh okay, I've just been embedding them in the WebM.

It should still work with embedding though.

Anonymous 2016/1/9/9:15:41 No.517211


The solution I chose is re-encoding for browser streaming. You get huge filesize gains (especially with libvpx9) and the subs show up just fine, but it comes at the price of (a little) quality loss and hardsubbing.

I'm imagining a niche for three-episode tests, where the convenience factor lets you try it out and you can download the top-tier (read: HorribleSubs 720p because nobody has good encodes these days) quality episodes from a torrent if you stick with it.

Anonymous 2016/1/9/9:52:7 No.517226



Did you try this? >>515415

I'm curious about it.

Anonymous 2016/1/9/10:10:12 No.517238


Oh no, I didn't. I'll have to re-add it and see if it makes a difference.

Anonymous 2016/1/9/12:44:22 No.517314


It's the same as a torrent. As long as someone keeps seeding it, it will exist forever

Anonymous 2016/1/9/23:21:47 No.517633





Anonymous 2016/1/9/23:30:21 No.517647



Okay, the new trickle dag'd hash is /ipfs/QmVb23Ad9Q3nyyLdhzRpqxVuqUJkwecPGwFViKTmSF6dEp

Anonymous 2016/1/10/8:39:51 No.517997


Really? That'd be glorious. I'll check it out, thanks anon.

Anonymous 2016/1/12/7:44:24 No.519692

At this stage, how well does go-ipfs run at server scale? Would a site hosting all their content on IPFS, like Neocities for example, experience any significant delay/latency delivering pieces of a website? Do they just use it for archival (unless they specialize in IPFS storage, like Glop or

Anonymous 2016/1/12/11:57:7 No.519809

File: 1455271027751.png




>not ZeroNet

bad choice mate

This is the future >>519171

Anonymous 2016/1/12/12:29:37 No.519828

muh update


fug JIDF HQ says switch tactics. :--DDDDD

tho, it seems to require JS so IPFS will be simpler for pure file sharing (also mounting and all that).

also impressive for a meme even memer than ipfs (because "bitcoin crypto", JS), zeronet supports tor including tor-only.

Anonymous 2016/1/12/18:51:50 No.520074


Even people in that thread don't want to use it, bad choice I guess. That seems to have the same problem people have with freenet, sharing content you don't explicitly want (possibly illegal).

Anonymous 2016/1/12/19:47:56 No.520101


>TCP is obsolete let's use React.js!!!!!!!!!!!

Anonymous 2016/1/13/19:2:8 No.521022

Found something for you lot to do.

Anonymous 2016/1/14/1:3:10 No.521376


The best we can do now is to either convince them to host their files with IPFS, or just put up any book/paper we download from Libgen onto IPFS for at least partial redundancy if it ever somehow kicks the bucket. I don't think all of us can mirror such a large amount of content.

Anonymous 2016/1/14/14:33:16 No.521755

Bumping to save from spam.

Anonymous 2016/1/18/3:6:33 No.524702

Is it just me or does the ipfs daemon randomly stop working after many hours? I also put my computer to sleep every night, could it be a bug when it wakes back up?

Anonymous 2016/1/18/5:6:28 No.524785




this isn't /r/jquery

Anonymous 2016/1/19/3:4:32 No.525759


I've been running mine for days to weeks without issues. Does it give you some kind of error or can you just not connect to things after you wake up?

Anonymous 2016/1/19/4:12:28 No.525797


On my desktop it will randomly stop connecting to anything, even stuff on my local network. If you try 'ipfs stats bw' you'll normally see at least some traffic, even if only in the sub-kilobyte/s range; when it's "dead" I see 0 kbps up and down. Then I kill it and restart it and it works just fine.

Just werks on my SBC, and that's the one I do all my hosting on, so it's really not a huge issue.
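That zero-traffic check could be scripted as a watchdog. The `ipfs stats bw` output format below is assumed from memory and may differ between versions, so treat the parsing as a sketch:

```python
# Sketch of a "is the daemon dead?" check: parse the rate lines out of
# `ipfs stats bw` output and flag zero traffic in both directions.
# The SAMPLE output format is an assumption, not verified against a
# specific go-ipfs version.
import re

SAMPLE = """\
Bandwidth
TotalIn: 1.2 GB
TotalOut: 450 MB
RateIn: 0 B/s
RateOut: 0 B/s
"""

def looks_dead(stats: str) -> bool:
    # Dead only if rate lines are present and all report zero.
    rates = re.findall(r'Rate(?:In|Out):\s*([\d.]+)', stats)
    return bool(rates) and all(float(r) == 0 for r in rates)

assert looks_dead(SAMPLE)  # 0 B/s both ways: time to restart the daemon
```

A cron job could run this against the real command output and restart the daemon when it returns true.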

Anonymous 2016/1/19/4:29:10 No.525806


Anonymous 2016/1/20/15:55:54 No.527043

Where is the C++ implementation?

Anonymous 2016/1/21/1:53:40 No.527458


The specification has been out for ages. Go make it.

Anonymous 2016/1/21/19:21:2 No.527961


pretty easy, but there is a built in faggotry filter that will likely delete your animu

Anonymous 2016/1/28/1:35:29 No.532935



>If you have an anime you want added to the list, please send me an email with the link, title, and quality.

Yeah, this seems like exactly what IPNS was made for, considering that otherwise you won't be able to follow the page.


>Is their DMCA infrastructure a problem for the network, or just a problem when fetching from the web?

Like >>514781 points out, it's only a problem when fetching from the web.

I ended up (stupidly) posting a link to some Light Novel PDFs on a public site, using a public gateway, and when I checked back later, it was blocked. But you can still access all the files from any other gateway, including a local one.

Basically there doesn't seem to be any kind of risk of being shut down by DMCA, unless a copyright holder is really aggressive and plans to go after all the peers.
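Since blocking happens per gateway, a fetch helper can just walk a gateway list until one of them serves the hash. A sketch; the hostnames here are only examples, with the local daemon's default gateway port first.

```python
from urllib.request import urlopen

# Example gateway list; hostnames are illustrative. A local daemon's
# gateway (port 8080 by default) is never subject to per-gateway blocks.
GATEWAYS = [
    "http://127.0.0.1:8080",   # local daemon
    "https://ipfs.io",         # public gateway (may block specific hashes)
]

def gateway_urls(ipfs_hash: str, gateways=GATEWAYS):
    """Build the /ipfs/<hash> URL for each gateway."""
    return [f"{g}/ipfs/{ipfs_hash}" for g in gateways]

def fetch_with_fallback(ipfs_hash: str) -> bytes:
    """Try each gateway in order until one serves the content."""
    last_err = None
    for url in gateway_urls(ipfs_hash):
        try:
            return urlopen(url, timeout=30).read()
        except OSError as err:  # covers HTTP errors, e.g. a blocked hash
            last_err = err
    raise last_err if last_err else RuntimeError("no gateways configured")
```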

Anonymous 2016/1/28/11:57:22 No.533197


There's a newer version on

Anonymous 2016/2/2/17:14:22 No.535229



That page has always had an IPNS link at the bottom of it; you can see it on both of those, and when clicked it returns the latest page if the host is online or someone else is keeping the IPNS record alive. I forget if IPNS keep-alive has been implemented yet, but it's certainly planned if not.

It's the "load latest tracker" link which points to


This practice is common on Freenet too, I believe, where people link to the version of the page they saw, since it will always contain whatever it is you wanted to link. There's no compromise, though, since you can always reach the latest version by clicking some link. It's a pretty good system imo. An HTTP analogy would be a page that hardcoded a link to its own domain name somewhere: if you save the HTML file locally and open it, you may not have the domain name, so you won't have a way to reach the latest version of the page, but if you click that link it will resolve if it can.

Anonymous 2016/2/3/1:27:46 No.535463

File: 1456961266612.png



>not Ethereum

Anonymous 2016/2/3/1:28:49 No.535464

File: 1456961330105.png



>not MaidSafe

Anonymous 2016/2/3/1:39:34 No.535473



> newfangled meme networks

> not freenet

Anonymous 2016/2/3/9:22:8 No.535637


>nobody can guess a hash of your file.

Until someone downloads it and sends the hash out to everyone looking for peers.

Just encrypt the file with a symmetric key first

Anonymous 2016/2/3/9:27:40 No.535638



>literally 90% of the nodes are FBI


Anonymous 2016/2/3/9:34:56 No.535642

Would it be possible to implement versioning and ownership/master options like you see in Syncthing in this?

The problem I see happening is that it's designed to be static, so how would you avoid bloat without something like that?

Anonymous 2016/2/3/15:9:12 No.535740


The only option at the moment is to use IPNS, but supposedly they're working on a system like you describe.

Anonymous 2016/2/6/4:41:6 No.537250

File: 1457232066168.jpg

found something neat about the http(s) gateways. they are all named after the planets of the solar system. also you can pull your file from each one directly without letting wget gamble in the round robin. i used http only, but you can do https with --no-check-certificate since wget doesn't like it otherwise.

imo this software would be best used on a VPS or in a datacenter where upload isn't an issue, but cucks insist upon using it backwards, trying to use the swarm as a backend to the http gateways thinking it will be free hosting. smdh, why bottleneck the data at 8 machines which are already overworked.

any news about browser add-ons that run the daemon? i heard there was a javascript version out somewhere too. the dev talks about browser integration but i haven't seen anything yet. if you could put the client node in the browser it would save work for the gateways.

yes i know there's a localhost redirect add-on and one for detecting hashes on webpages, but this requires you to run your client/server on your local machine by hand. I know a client-only browser add-on would be stubbing the program down, but it would make it more like MEGA and such. surely with WebRTC and other cuckware it shouldn't be hard to set up a botnet as well.

>hey mane get this add-on and let me send you a file.

Anonymous 2016/2/6/6:11:24 No.537292

File: 1457237484404.png



Anonymous 2016/2/6/7:12:47 No.537307


hello reddit.

Anonymous 2016/2/11/22:59:31 No.541191

File: 1457729981421.jpg

>avoid duplicating files added to ipfs

>anarcat opened this Issue on Mar 6, 2015


This is like one of the two things keeping IPFS in meme-software territory.

Anonymous 2016/2/11/23:8:3 No.541198


Not really, the common user doesn't want to share multi-GB files.

Anonymous 2016/2/11/23:23:51 No.541212


Would be a nice replacement for torrents, since it would be harder for a swarm to die. Also, season packs would be superfluous.

Anonymous 2016/2/11/23:34:37 No.541224

>still no i2p support

Anonymous 2016/2/11/23:40:52 No.541229



It'd be nice for large files, but not for files you are constantly editing. I believe a comment there mentioned it, but the reason it's like this is because people might end up moving or editing the original file, which would break the hash.

A solution might be to still move the file to the datastore and leave behind a link, but that only solves the "moving" issue, not the editing.

Anonymous 2016/2/12/1:7:29 No.541282


Maidsafe is SJW approved! Look at all the diversity on their website, isn't racemixing the most heartwarming thing ever to be forced down your throat? I hope it turns out to be a total scam, but as of now the coin is severely over-valued. As for ethereum, what can it do that counterparty can't? Check m8 altcoins.

Anonymous 2016/2/12/1:44:36 No.541316



IPFS could have downloadable waifu-bots and I still wouldn't use it until there's i2p support.

Anonymous 2016/2/12/3:3:22 No.541368

File: 1457744602469.jpg

I think IPFS and ZeroNet threads should be merged into one versus thread. Does anybody agree?

Anonymous 2016/2/12/3:42:39 No.541402


It's already usable for large files as a torrent replacement, but normalfags won't just magically start sharing GB+ size torrents they made themselves, unlike power users. Even then, if you're seeding that shit, you've got to have lots of space anyway. It's clearly a flaw, but as they noted, not as high-priority as ironing out issues with the protocol itself.

Anonymous 2016/2/12/16:51:55 No.541692

How resource intensive is the current implementation of ipfs? My server is a simple arm board with less than 200mb of spare ram.

Anonymous 2016/2/12/22:25:9 No.541828


Not very CPU intensive, but I think 200MB could be cutting it short, although you should try and see either way.

Anonymous 2016/2/13/4:4:0 No.542068


It's on the list and IPFS is still in alpha, surely it will be added when possible since they seem to be open to the idea. Honestly though shouldn't you be using an underlying file system that handles this anyway like ZFS?

Either way I'm also looking forward to that issue being closed, that functionality should be a part of it.



I don't know if it would be enough for everyone, but the daemon uses ~113MB of memory on my machine with the latest master and the latest version of Go. Prior to Go 1.6 it was using ~200, I don't know if it was the runtime improvements or just coincidental with changes made in IPFS.

Anonymous 2016/2/13/4:32:6 No.542080


IPFS 0.3.11, compiled with go 1.5.1 here.

My IPFS daemon starts at around 50MB memory, but ends up working up to around 200MB after some time has passed.

Actually, right now I've mirrored, so the difference likely lies there?

Anonymous 2016/2/13/4:43:35 No.542086


I'm not sure. I have a lot of content on my node, and it used to go up to 200MB on 1.5 after a while, but now it caps out in the 100s on 1.6. We're talking weeks of uptime for both, too, with some moderate downstream usage and high upstream usage.

They did improve the GC for Go 1.6 and said they're going to again for 1.7, maybe that's related.

Also I'm using the latest master which is version 0.4.0-dev so that could also be related as it's a big change in the IPFS codebase.

I do not recommend updating to 0.4 yet though, simply because they say not to on GitHub. I guess they're telling people to wait because of the repo and network differences: right now .3 can talk to .3 and .4 can talk to .4, but there's no cross-talk yet, so if you run .4 and try to grab content that's only on .3, you won't be able to. In practice I haven't had any issues myself. Worst case scenario, you tell the public gateway to grab the .3 content for you, then request it again on .4; since the gateway will have it and hosts for both networks, you're mirroring it for .4 after that.

Relevant link

Anonymous 2016/2/13/6:40:48 No.542130


From what I understand, even if you change small parts of the file and not the whole thing, those unedited parts live on. The file hash is supposedly a map to block hashes. Overlapping blocks could just be seeded from the original or other files. If I am wrong, please point it out!
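That matches how content addressing dedups in principle. Here's a toy illustration with fixed-size chunks; real IPFS uses much larger blocks and a Merkle DAG, so treat this as a sketch of the idea, not the actual format.

```python
import hashlib

CHUNK = 256  # toy block size; real IPFS defaults to 256 KiB chunks

def block_hashes(data: bytes):
    """Split data into fixed-size blocks and hash each one."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

# four distinct 256-byte blocks, then an edit that only touches the tail
original = b"".join(bytes([i]) * CHUNK for i in range(4))
appended = original + b"\xff" * CHUNK

shared = set(block_hashes(original)) & set(block_hashes(appended))
# The four untouched leading blocks hash identically, so peers holding the
# original can still seed them. Caveat: a fixed-size chunker is defeated by
# an *insertion* near the front (every boundary shifts), which is why
# rolling-hash chunkers exist.
```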

Anonymous 2016/2/13/11:5:54 No.542220


I don't feel like having to back up my backup hard drive and change the file structure just to upload my animu to IPFS, or dropping it on my desktop and eating up my whole home partition.

Anonymous 2016/2/13/17:18:23 No.542321


Found the inbred

Anonymous 2016/2/14/22:46:9 No.543138


It gets worse



IPFS is a good thing, but its only working implementation is quite shitty.

Anonymous 2016/2/14/22:54:42 No.543143


3) (protocol fault) SEGMENTS ARE UNTYPED

4) BLOCKS DIRECTORY. IT'S LIKE THEY INTENTIONALLY DID IT THE MOST WRONG WAY POSSIBLE (protip: replacing a directory with 36218 files by a directory with 36218 directories with 1 file each is not beneficial under any file system)



Anonymous 2016/2/14/22:59:55 No.543146






Anonymous 2016/2/14/23:5:22 No.543151



Anonymous 2016/2/14/23:20:5 No.543158

So how's that i2p suppor-

Oh... nevermind...

Anonymous 2016/2/15/1:20:17 No.543250




So, should we give up on ipfs and use maidsafe instead?

Anonymous 2016/2/15/1:33:14 No.543256


Maidsafe has strong smell of overengineered vapourware.

IPFS is shit, but it's simple shit that works right now.

Anonymous 2016/2/15/2:8:22 No.543283


So: good idea/blueprint, but shitty execution? They should hire competent programmers; it probably wouldn't take them this long to make the thing work, either.

Anonymous 2016/2/15/2:44:26 No.543303


I'd argue they're complementary. Zeronet is better for dynamic, mutable content while IPFS is better for archival and immutability. At least that's my naive understanding of both.

Anonymous 2016/2/15/2:53:5 No.543309



Outside of 'ipfs stats bw'?


>Why doesn't your alpha software work with my alpha software?


They should throw a Kikestarter together. They have the excitement going and they should ride the wave instead of letting it fizzle out by the time go-ipfs hits beta. Especially with the Winblows crowd. I heard it barely works on that.

Anonymous 2016/2/15/3:24:45 No.543330


Because it was promised and they still haven't implemented it.

Anonymous 2016/2/15/4:32:6 No.543371

File: 1458009126798.jpg


Aren't they planning on replacing the block store and leveldb with something else?

>no stats

How do you mean? There are file statistics, datastore stats, and they plan to add limitations (bandwidth caps, disk limits, etc.), so eventually there should be traffic statistics as well. There may be more I don't know about too, but I don't look into that stuff, I just want an hourly bandwidth cap.


Why timeout instead of trying forever? The whole idea is that things are supposed to be reachable always so why not keep trying until they are reached?

If I go to get a file or resolve an IPNS name I don't want to return after a timeout, I want the command to either succeed or block until it succeeds. Implementing your own timeout around this should be doable if you need one, but outside of some kind of failure/abort state, when would you even want to time out? That makes sense for other protocols where high reliability isn't expected, but that's not the intent for IPFS. Maybe it's silly of me to think that way though.

Also, IPNS isn't even finished yet, so I'd have to give it a pass if it's not working well right now. Names don't work 100% of the time now because only the owner can keep one alive; they're going to make it so other peers can keep your name alive without you being online, but that's not in yet. Once it is, it shouldn't ever fail, so there'd be no need to time out on it unless you're not connected, which would probably return an error prior to making the call. I'm not sure though.


image related


Who's working on that anyway? Are i2p people working on the support or are IPFS people doing that? I don't know much about the work being done there.


The good thing is they don't have to, anyone can make an implementation however they want as long as it conforms to the spec. People can hate on the official Go version all they want but they don't have to use it. If someone really thinks they can do better they totally could and people could use their version while interoperating with everyone else.


To be fair a lot of stuff is promised and not implemented yet, that's very typical of pre-release software, time has to pass for people to actually write it.


>Especially with the Winblows crowd. I heard it barely works on that.

Works fine on my machine. The only issue with Windows is that it doesn't have FUSE, so there's no way to mount IPFS as a drive. Everything else should work though. Someone wrote a separate program that mounts IPFS via Dokan, but I have never once gotten Dokan to work with anything on any version of Windows. Maybe it works for some people, but it doesn't work for me at all: it mounts, I can go into the directory, and then it crashes (the filesystem client, not IPFS). Probably has something to do with it being written in Delphi.

Anonymous 2016/2/16/9:25:10 No.544275

File: 1458113110430.jpg



Well, I went full retard with that

>things are supposed to be reachable always

That is far from being the case for the IPFS design, which does not actively duplicate and spread data blocks. Its design is closer to BitTorrent (swarm downloading from seeds; data duplication is either explicit (ipfs pin) or opportunistic, i.e. seeding from cache) than to a distributed data store (where data blocks are treated the same way as DHT keys).

Even if we assume that the data itself is always available, it's still an absurdly strong assumption, as it would require that the client (from the IPFS application down to the physical network) never falls into an unexpected state which could hang forever.

>I want the command to either succeed or block until it succeeds

Are you sure? Even if it means blocking for 5 years? There is always some deadline, just sometimes it is implicit "until I get bored with it".

>but outside of some kind of failure/abort state when would you even want to timeout?

You wouldn't, but, on the other hand, you always want "some kind of failure/abort state" - stochastic "it might return at any moment" state is very shitty thing to work with. Especially if you want to make something high-availability.

>Implementing your own timeout around this should be doable

It is doable, but it's a feature you expect from anything that is not an ad-hoc bash script. Also, "kill $(ps ax | grep 'ipfs resolve' | sed 's/ *\([0-9]*\).*$/\1/')" (or just pkill -f 'ipfs resolve') is not the most efficient implementation, but you cannot do better unless you do it inside the application.

On the other hand, making "infinite blocking" is trivial: either by calling in a loop, or by setting the timeout to 30 years.
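For what it's worth, wrapping a client-side deadline around any command is a few lines in a scripting language; this sketch uses Python's subprocess timeout instead of the ps/grep/kill hack (coreutils' timeout(1) does the same from a shell).

```python
import subprocess

def run_with_timeout(cmd, seconds):
    """Run cmd, returning its stdout, or None if it didn't finish in time.

    The child is killed cleanly on timeout, and the caller gets a definite
    success/give-up answer instead of grepping the process table.
    """
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=seconds)
    except subprocess.TimeoutExpired:
        return None
    return result.stdout

# e.g. run_with_timeout(["ipfs", "resolve", "/ipns/<peer-id>"], 60);
# "block until it succeeds" is then just this call inside a retry loop.
```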

Anonymous 2016/2/16/19:48:20 No.544535

Neat for thread archival, tried saving a dying thread with wget (with rewrite for ads -> localhost to remove annoying loading icon):


Anonymous 2016/2/18/12:11:43 No.545582


>which does not actively duplicate and spread data blocks

That is a planned optional feature, and there is also a project by the same team, "filecoin", that will let people generate and spend a currency used for distributed storage. For example, you could pay me 1 to host your file for a day, then I could spend that to do the same with someone else, either directly or with a random set of peers. I'm interested to see what they do with it, but I'm planning on just turning on the distributed option myself; I have the space and network to work with and don't mind.

The optional "free" system is presumably going to work like freenet or perfectdark where it just distributes data to peers that allow holding random content. I think there are details on github but I don't remember.

I understand that forcing distribution is great for the network but a lot of people dislike that stuff being on by default since it uses their disk and network on something they don't even know, they could be unknowingly redistributing illegal content and they don't want that.

>Are you sure?

I mean, that's the thing with this: even if it's not assured that content will always be reachable, that's the intent, so if I'm designing something which utilizes IPFS I have that in mind. If I want timeouts and such I'd probably use another protocol; if I'm using IPFS I intend to maximize the reliability of the content that's expected to be received. In some event where we rely on a critical object we may have no choice no matter what protocol we use: if your program needs a file to act on before it can continue, you'll either block or poll anyway.

Regardless I doubt it will remain that way, there must be plans to incorporate them later once everything is more finalized, get it working first then polish it up. I could be wrong though.

>You wouldn't, but, on the other hand, you always want "some kind of failure/abort state" - stochastic "it might return at any moment" state is very shitty thing to work with. Especially if you want to make something high-availability.

That's fair.

Maybe it's worth filing an issue about it to see what they think and if they'll fix it sooner rather than later.

I hope my English isn't too terrible this early in the morning.


Nice. I wonder if you could make a distributed archive this way by doing hash-only requests on 8ch a lot. So, like, you maintain the front page and maybe some thread index that points to a hash of a thread's state before it 404s, but you don't host the content yourself, just the hashes to it. Then if anyone else archives a thread or file via ipfs it would be reachable.

Maybe not the best idea, but I kind of like the idea of an archive that only has threads that were manually chosen by other people to be saved and not just by the site owner.

At that point though I guess you'd just archive all the textual data and maybe the thumbnails while relying on other people in the network to host full images. Could be cool.

Anonymous 2016/2/21/0:13:17 No.547377



Why save homepage/frontpage and other bloat?

I've used this in the past to save without full-size images

wget -e robots=off -p -k


>At that point though I guess you'd just archive all the textual data and maybethe thumbnails while relying on other people in the network to host full images. Could be cool.

You would still need to download full images to generate the "full" hash tree

Anonymous 2016/2/21/1:7:10 No.547402


ima i2p bro and i wanna do ipfs's stuff but am too busy obsessing about nntpchan and other autisms to be useful ;~;

Anonymous 2016/2/21/1:11:9 No.547403

IPFS is and always will be a meme until it supports Tor and I2P.

Anonymous 2016/2/21/1:58:4 No.547442


>You would still need to download full images to generate the "full" hash tree

For sure but you wouldn't have to store it forever. A lot of archives do that today where they store images for some amount of time but they will 404 eventually. You could use the same system but not have to worry about storing it yourself permanently as long as someone else did, but you still get the benefit of it always being reachable even without storing it yourself.


Can't you use it right now with Tor? I thought someone was working on i2p support, I didn't get a response on that earlier in the thread.

Anonymous 2016/2/21/2:10:4 No.547454


nntpchan with ipfs for the images

Anonymous 2016/2/21/16:39:58 No.547756


>Why save homepage/frontpage and other bloat?

Mainly laziness, saving original images is easiest with just plain domain restricted depth of 1

Anonymous 2016/2/23/14:44:33 No.549065




Anonymous 2016/2/23/15:8:2 No.549078


sounds great, but how would I nuke CP?

Anonymous 2016/2/23/18:52:42 No.549140


IPFS does not spread content automatically like Freenet; you only seed what you get yourself. So a blacklist of some sort, probably.

Anonymous 2016/2/23/19:21:5 No.549153


so to delete an attachment, just stop seeding it and remove the reference in the markup?

Anonymous 2016/2/23/19:21:18 No.549154

rather, "delete"

Anonymous 2016/2/23/19:42:28 No.549162


yes, it's called "unpinning" in IPFS: if no node has the file pinned (or is temporarily hosting it as a result of having downloaded it), the file will not be available
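The pin/GC behavior can be modeled as plain graph reachability: garbage collection keeps exactly the blocks reachable from some pin. A toy sketch with hypothetical block names, not the real go-ipfs repo internals.

```python
# Toy model of pinning: blocks survive a GC pass only if a pin references
# them, directly or through DAG links.

def reachable(roots, links):
    """All block IDs reachable from the pinned roots via DAG links."""
    seen, stack = set(), list(roots)
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(links.get(node, []))
    return seen

# a small DAG: a pinned directory pointing at two file blocks,
# plus one unpinned, unreferenced block
links = {"QmDir": ["QmFileA", "QmFileB"]}
blocks = {"QmDir", "QmFileA", "QmFileB", "QmOrphan"}
pins = {"QmDir"}

after_gc = blocks & reachable(pins, links)  # QmOrphan gets collected
```

Unpinning "QmDir" would empty `pins`, and the next GC pass would collect everything, which is the "delete" being described.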

Anonymous 2016/2/25/0:32:32 No.549965


>IPFS does not spread content automatically like freenet,

I think that's actually a big flaw. The way maidsafe plans to distribute data sounds like the most sane way: service providers get paid and thus have an incentive to stay online, and encrypted chunks are spread around the network so that nobody has plain pizza on their drive. As a bonus, users don't need to rely on a centralized portal to access data when they don't have the storage or daemon required to get data locally, since again, all resources are provided by people who get paid for it in a distributed, fair manner.

Anonymous 2016/2/25/0:35:13 No.549967


So should some brave soul offer up all his CP so we can hash it and add it to a blacklist?

Anonymous 2016/2/25/5:31:19 No.550082


Go is fine

Anonymous 2016/2/25/17:41:24 No.550381


it could easily be gotten around by changing a single bit in the file... we need a program to automatically look at images and decide if they are similar to known images, which is impossible because you would need a database of the stuff, which is a terrible idea in itself...

Anonymous 2016/2/25/18:48:56 No.550410


Compared to VB, maybe.

Anonymous 2016/2/25/18:49:22 No.550411


It's trivial to do. Don't talk when you don't know shit.

Anonymous 2016/2/25/19:30:16 No.550428

File: 1458927016329.webm

has anyone modified the Twister html-frontend to display ipfs content yet? I could really use an alternative to kodi.

Anonymous 2016/2/25/19:35:31 No.550429


Sure it'd be easy to implement something to compare images to a database of known images.

The issue is holding a comprehensive database of CP, and expecting not to have the aforementioned database shut down.

Anonymous 2016/2/26/0:22:54 No.550541


thank you, that was my point

Anonymous 2016/2/26/0:28:27 No.550544


new thought: a database of rough vectors containing the rough shape and colors of the original images. nothing illegal would be hosted, but instead the rough outline could be used to detect the images (of course with a certain color tolerance to remove fine details). maybe with refinement it could be accurate and less error prone.

Anonymous 2016/2/26/0:36:42 No.550548


that's one of joshy boy's seekrit projects sssshhhhh don't tell

Anonymous 2016/2/26/1:45:52 No.550570


Anonymous 2016/2/26/1:49:30 No.550572


That's the idea behind filecoin, essentially.

They want to keep IPFS and filecoin completely separate, though, and just have filecoin work on top of IPFS.

I personally prefer this, although I'm not really that familiar with maidsafe.

Anonymous 2016/2/26/5:20:32 No.550664


You can also simply extract feature vectors from CP images.


No. You lose a fuckload of meaningful information and yet your shit's still recognizable and enforceable.

Anonymous 2016/2/26/6:26:11 No.550686


What if an image is processed just enough to be considered an illustration (as cartoon lolis isn't illegal in most parts) but still retains enough information to be compared to a photo?

Anonymous 2016/2/26/6:40:50 No.550694


Not only would it be worthy of an honorary PhD-winning paper, it is still illegal in most parts of the world (it's in few parts that it's not), and it would obviously represent an unimaginably higher amount of effort to come up with.

Anonymous 2016/2/26/7:11:26 No.550706


This already exists, and they already have a huge database of most known CP

>PhotoDNA is primarily used in the prevention of child pornography, and works by computing a hash that represents an image. This hash is computed such that it is resistant to alterations in the image, including resizing and minor color alterations.[1] It works by converting the image to black and white, re-sizing it, breaking it into a grid, and looking at intensity gradients or edges.[15]
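The intensity-gradient idea in that quote is the same trick as the openly documented "difference hash". A toy version on a bare grayscale grid; this is not PhotoDNA itself (that algorithm is proprietary), and a real pipeline would first decode and downscale the image to the grid.

```python
# Toy difference hash: one bit per adjacent-pixel gradient.
# Gradients survive uniform brightness changes, which is the point.

def dhash(grid):
    """Hash a grid of intensities: is each pixel brighter than its neighbor?"""
    bits = 0
    for row in grid:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes; small = similar images."""
    return bin(a ^ b).count("1")

base = [[10, 20, 30, 40], [40, 30, 20, 10]]
# a minor brightness shift leaves every gradient, and thus the hash, intact
brighter = [[v + 5 for v in row] for row in base]
assert hamming(dhash(base), dhash(brighter)) == 0
```

Single-bit edits to the file don't change the pixels meaningfully, so a gradient hash defeats the "flip one bit" evasion mentioned earlier, though it's obviously far cruder than production systems.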

Anonymous 2016/2/26/7:20:59 No.550709


Twitter had a huge problem with CP before PhotoDNA was added.

Pre 2012 twitter, If you searched for all recent images like this ( ), you would stumble across CP every few minutes.

Anonymous 2016/2/26/8:0:20 No.550722


>Not only would it be worthy of a honorary PhD-winning paper

That hard, huh?

>it is still illegal in most parts of the world

The database only needs to be hosted somewhere where it is legal though.


Actually this seems like a good idea to use.

Anonymous 2016/2/26/18:1:54 No.550999


This is >>550664

(although they call it a hash here for plebs like you to understand better).

Beside ad-hoc methods like these, one can also simply pass an image through an autoencoder and extract the latent representation as a feature vector.

Anonymous 2016/3/5/20:18:44 No.558289

0.4.0 WHEN


Anonymous 2016/3/10/1:29:42 No.561805

Someone should make a youtube mirror using IPFS. It would be a normal site but would use youtube-dl to download the videos and use IPFS to store and serve them. That way the videos are much more resilient to censorship and people can save a local copy really easily while also contributing to keep it alive. There's already a js video player.


An IPFS image hosting site already exists. We could absorb most of the code so integrating it shouldn't be that hard.

Anonymous 2016/3/10/1:37:5 No.561813


Forgot to mention that the video site itself would serve the requested video for a set period of time (unless it's deemed very important) and then get deleted to make room for new videos. The older videos would hopefully still live on through individuals who downloaded and pinned them in IPFS.

Anonymous 2016/3/10/15:10:59 No.562264


Looks like there's been major improvements made in pretty much every area.

Anonymous 2016/3/11/15:8:15 No.563051


Any noteworthy improvements?

Anonymous 2016/3/11/15:45:9 No.563066


that is genius

Anonymous 2016/3/11/22:27:11 No.563333


They're listed in the post.


Optimizations on adding files

Lots of bugfixes

More modularity for developers.

Added a simple(er) interface for file operations

I meant to post this yesterday, hit submit and returned just now to see I didn't fill out the captcha. Nice.

Anonymous 2016/3/11/23:51:58 No.563395

File: 1460407918748.jpeg

>This release contains a breaking change to the network wire protocol in the form of a major refactor and upgrade to the libp2p handshake protocol. Because of the refactor, all IPFS daemons earlier than 0.4.0 will not be able to communicate with the newest version. It is strongly recommended that everyone running an IPFS node upgrades to the latest version as soon as possible, as these nodes will, after a certain time, no longer be able to communicate with the majority of the network until they are upgraded. There are instructions on how to update below.

Anonymous 2016/3/12/1:11:6 No.563465


I'm glad they're making significant changes even if it means breakage, I'd much rather they do it now than when it's too late (post alpha). If this thing is going to take off I want the reference implementation to be good since that's what most people will be using or basing off of. Get those important changes in while they can.

Anonymous 2016/3/13/15:57:2 No.564883


It feels a lot smoother, previously pinning certain things would cause it to randomly stall for forever, it all seems to have been fixed now. This is great.

Anonymous 2016/3/13/23:3:48 No.565144

Can't wait for ipld

Anonymous 2016/3/14/2:17:38 No.565303



Am I understanding this right? It seems like a metalink, but with more capabilities, and in JSON instead of XML.

The implications of such a thing on top of IPFS are pretty interesting: distributed data structures on top of a distributed network, tied together with a simple single standard format. That's pretty awesome.

Distributing a set of data with potential metadata bundled with it in a single link.

Is all that correct?

Anonymous 2016/3/14/22:38:24 No.565967


You're right, except they tend to prefer CBOR over JSON for their canonical format.

You can read more about this here:
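A minimal sketch of what such a linked object might look like: plain key/value metadata plus named links, where a link is an object whose "/" key holds a content hash. The field names other than the "/" link key are made up for illustration.

```python
import json

# Hypothetical IPLD-style object: metadata plus merkle-links. JSON is used
# here for readability; the canonical encoding is CBOR, which maps
# one-to-one onto structures like this.
release = {
    "title": "some-dataset",
    "files": {
        "readme": {"/": "QmReadmeHash"},
        "data":   {"/": "QmDataHash"},
    },
}

encoded = json.dumps(release, sort_keys=True)
decoded = json.loads(encoded)
```

Resolving a path like `files/data` would follow the "/" link to another content-addressed object, which is what lets one link describe a whole bundle of data plus metadata.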

Anonymous 2016/3/15/1:30:15 No.566056


Anonymous 2016/3/15/5:52:24 No.566207

File: 1460688745028-0.jpg

Here's my WWII stuff

Propaganda and Maps




My node will be up and down sporadically since I play video games online sometimes.

Anonymous 2016/3/18/3:26:35 No.568994


I've been running the daemon constantly since this post, the constant upload seems to be killing my ONT, I've never seen anything do that before but I'm pretty new to fiber optic networks and have never had this long of a constant upload stream before.

I have to actually reset the ONT not the router. I wonder if it's actually failing or if my ISP is kicking me off for suspected malware. Anyone else experiencing this? I'd like to keep hosting.

I wish I could inspect the logs on that box but I can't find any way to even access it, looks like only the router can talk with it.

Anonymous 2016/3/18/5:24:13 No.569117

Posting the first three episodes for this season's anime in this thread if you wish to peruse. If you have any of them or you wish to add some, please help out.


Anonymous 2016/3/18/19:21:28 No.569735


Problem with that is youtube-dl is nondeterministic with what formats it pulls down, unless you tell it to always grab the shit 360p VP8 encode Youtube does for older browsers.

Anonymous 2016/3/19/12:24:56 No.570650


For me youtube-dl defaults to 720p mp4, however you may explore the available options with the -F parameter and select with -f. Most of the time 720 and even 1080 webms are available, even though 1080 seems to be separated to 2 streams which need to be combined with little effort.

Anonymous 2016/3/19/15:13:57 No.570806

i2p is a meme

prove me wrong

Anonymous 2016/3/19/16:32:28 No.570873



Note that you can also do -f x+y in order to grab separate audio and video streams (assuming x is video and y is audio, or the other way around). youtube-dl will automatically mux them using the ffmpeg from your PATH.

Anonymous 2016/3/20/14:12:43 No.571906


Couldn't you just grab the list of available formats and parse it? The userscript "YouTube Center" offers an option to download videos, and in the settings you can pick the quality you want or tick a box for the highest available; I'm sure their API makes it possible to fetch and parse the quality for a given video. If that's possible you could just always mirror the highest. Obviously you'd get various sizes since not all videos are 4k or whatever their resolution limit is, but that shouldn't be a problem, just scale the video in the player.

Anonymous 2016/3/20/16:47:15 No.571997


>The Invisible Internet Project (I2P) is an overlay network and darknet that allows applications to send messages to each other pseudonymously and securely.

>is an overlay network and darknet

not meme

Anonymous 2016/3/24/14:26:13 No.576478

Vintage memes:


Anonymous 2016/3/26/16:51:55 No.578967


ipfs.js is now extraordinarily close to actually being usable. To be honest I never thought they'd actually make it.

Anonymous 2016/3/26/17:14:3 No.578984


Unlike some other magic dust salesmen, the people behind IPFS seem to actually care about people using their product, this looks promising but i'll wait until it's actually there.

Once it hits, i'll seriously consider dropping WebTorrent for it.

Anonymous 2016/3/26/17:26:35 No.578999


A part of me feels it must be too good to be true, like there's no way we would be allowed to have nice things.

Anonymous 2016/3/27/8:24:47 No.579714

File: 1461734688012.jpg



This seems to be the same issue I'm having now.


Why are all American ISPs so terrible? This ONT is the shitty desktop model, it doesn't even have a battery backup, so I'm not surprised it can't handle many peers at once. I'm going to see if there's anything they can do about it, but first I have to put up with tech support phone hell and hope I get someone who can actually help me, if anyone can at all.

Anonymous 2016/3/27/9:11:28 No.579733

IPFS 0.4.1 came out, mostly just bug fixes.

They also added a roadmap which looks very promising. Looking forward to the lower memory and bandwidth usage.

What I really want though is an option to use hardlinks or symlinks instead of having to copy files into a static directory. That's a major reason why I'm not sharing my terabytes of content with it right now.

Anonymous 2016/3/27/11:32:45 No.579782


>IPFS in something other than Go.

Holy cowtits. Now all it needs is to work on I2P and I'll cry tears of joy.

Anonymous 2016/3/27/22:20:52 No.580050

File: 1461784852909.png


>What I really want though is an option to use hardlinks or symlinks instead of having to copy files into a static directory.

They're working on a solution for this

>That's a major reason why I'm not sharing my terabytes of content with it right now.

Same, once it's done I'm adding my entire media store.


That has been the biggest problem for years and years: a lot of people say "this is possible, I know it is!" and that's it, they make a big promise that something can be done and that it will be, but they never produce anything. IPFS made a promise, a spec, and a working implementation all at once; no promises without a base, a real foundation to start from and use even before a proper release.


IPFS is birthed out of this frustration, there's no reason we can't have these nice things. The developer gave a talk stating how unoriginal this idea is, it's a bunch of old and existing concepts that people have talked about and even used for decades but nobody has sat down and weaved them together. IPFS is just an amalgamation of existing and proven concepts, it's just a matter of making them work together which will take some time and is anything but impossible. The state that the alpha is in right now is more than usable and is only improving, I've been waiting for this for too long and am excited to finally see something like this realized. There's a better slide than pic related but I didn't look too hard for it.



Was this addressed with this?

>allow promises (used in get, refs) to fail (@whyrusleeping)

Anonymous 2016/3/28/15:50:57 No.580548


When will IPFS reach the point where you can say "you can use it now and it is better than HTTP in every way"? I saw the roadmap, but don't understand it enough to know when IPFS can finally go mainstream.

Anonymous 2016/3/28/19:49:12 No.580725


Realistically, it probably won't be 'done' until early 2017 at the earliest.

You can use it already though, and it works pretty well. The real tipping point in my opinion will be when ipfs.js is mostly implemented. Which is to say, at that point it'll be in a domain I can do something with. I'm perfectly happy to go crazy throwing up sites and services on IPFS all over the place (I'm bursting with ideas for that), but so long as it requires Joe Average to install something it is not going mainstream. Really looking forward to being able to actually use the js port.

Anonymous 2016/3/28/22:37:11 No.580949

File: 1461872231304.jpeg


It's kind of hard to say. HTTP and IPFS can seem pretty similar on the surface (I enter a URI and it gives me files), but the way they go about doing that is so drastically different that they are not always directly comparable. So, to simplify things, I like to think of it in terms of a few broad categories:

static content: IPFS already does this pretty well, although it can be slow to find new or unpopular files. Still, it has the huge advantage of allowing anyone to help host any file, which in my opinion is a reasonable tradeoff, but the speed will also increase as they improve how they use bitswap.

dynamic content (single-author): By "single-author" I mean that the content is authored by the same person that publishes it. This is mostly IPNS territory, which in my experience has been flaky at best. The 0.4.0 release was supposed to have fixed a lot of the problems, but I haven't had a chance to personally play with it. If it did then that's a big step up to being on par with HTTP.

dynamic content (multi-author): This is the really interesting stuff, where a website acts as a sort of hub for other people to publish content to. This needs to be figured out more before IPFS ever goes "mainstream." The current idea, as I understand it, is to implement some sort of pub/sub system in IPFS (you can read a lot of the discussion about it here), but really there are a billion and one other possibilities, including using IPFS in tandem with other networks like Ethereum. Obviously you could just use HTTP for this and IPFS for everything else, but that kind of ruins the point.

live content: Basically just dynamic content, but very fast. IPFS is definitely not capable of this yet.

There are more, but once they have all of these things checked off, I think that's when I'll be able to say for certain "you can use it now and it is better than HTTP in every way," but it won't happen all at once. If anything, it'll probably happen in stages, both as it improves and as people find more and more uses for it. In fact, I'd say that it's already a pretty good contender to BitTorrent, especially for small files or collections (since everything is de-duplicated), as well as some smaller websites.

As far as the actual process of it going mainstream, the js implementation will be huge for that, as >>580725 said. The barrier to entry will be as low as being able to load a webpage.

The dark web has never looked so bright.

Anonymous 2016/3/28/23:21:30 No.581000


Thank you for that image, it was the one I wanted to post earlier.

>but the speed will also increase as they improve how they use bitswap.

In theory this should also improve with general popularity/adoption. More peers = more potential hosts, which means better distribution, so that's good in terms of reliability (availability/uptime) as well as shorter paths/jumps. I'm not sure the latter will help dramatically for most people, but it certainly will in things like apartment complexes and dorms: fetching content without going over the internet, very fast.

>dynamic content

IPNS paired with IPRS is what I'm waiting for. Essentially it lets other peers keep an IPNS record alive so the IPNS host doesn't have to be up; if the original IPNS host is down, you ask the network for the latest IPFS hash the IPNS name pointed to.

Which imo seems good enough to replace DNS for non-humans, or even humans honestly; it's obviously not easy to remember an IPNS hash compared to a domain name, but it is possible to link it or bookmark it, and if the network keeps it alive that'd be just great. A highly reliable domain + highly reliable static content, all without the original host once distributed.

>pub/sub, Ethereum

Also excited for this, Ethereum seems like a pretty crazy pair for this, IPFS does distributed static content well, Ethereum does distributed dynamic content well, how perfect. I'm really excited to see if someone does anything big with the 2, imagining a hostless dynamic application is ridiculous, I really want to see some good ones that are not just concepts/proofs.

>live content

There were some discussions about it on github but I forgot all the links, there were a bunch of ideas in the previous threads.

>js implementation

I would really like to see them put this on the gateways if possible, I haven't actually looked at any of the js stuff myself yet though.

I'm also looking forward to more network traversal stuff and message passing, message passing in particular is something I'm interested in (relaying traffic for peers who have direct connection issues).

What an exciting thing, I am beyond sick of the lack of reliability of content or the roundabout ways we use now to preserve/share/distribute things.

>lel here's your file

>URL shortener with waits and ads

>linking to file locker with waits and ads

>piss poor speeds and paywalls

>not even reliable

Torrents are good but they have fragmentation issues with piece sizes, trackers, etc.

People have already pointed out package manager repos; they would benefit a lot from this, and setting up mirrors has never been simpler: just clone the repo locally and you're set for the entire global network. That goes for anything really, want a mirror? Just mirror it on a machine with ipfs.
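The workflow is basically one command; a minimal sketch, assuming go-ipfs is installed with a running daemon (guarded so it's a no-op otherwise, and ./mirror is a stand-in for whatever tree you're mirroring):

```shell
# Create a stand-in directory to mirror.
mkdir -p mirror && echo "hello" > mirror/file.txt

if command -v ipfs >/dev/null 2>&1; then
    ipfs add -r mirror            # recursively add; the last line printed is the root hash
    ipfs pin ls --type recursive  # adds are pinned by default, so the tree stays in the repo
fi
echo "mirror sketch done"
```

Anyone who then runs ipfs pin add -r on that root hash from their own box becomes another mirror.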

I could continue to gush over this, I need it, and it looks like we're going to actually get it.

Anonymous 2016/3/29/3:13:34 No.581197




Whatever they did in 0.4.1 seems to have fixed this, maybe it was holding connections forever on 0.4.0, I'm not sure.

Anonymous 2016/3/29/4:2:22 No.581220

ebuilds where

Anonymous 2016/3/29/5:16:11 No.581271


Google gave me this

Anonymous 2016/3/30/15:42:20 No.582958


>the communication between browser and machine nodes will happen through WebSockets+SPDY

Will it reuse the existing open tcp port in the daemon?

Shareaza was nice because it was able to run Gnutella/Gnutella2/ed2k/torrent on the same incoming port.

Anonymous 2016/3/30/15:49:23 No.582964


>things are supposed to be reachable always

>It is far from being a case for IPFS design

The Interplanetary Filesystem is supposed to be usable over deep space links with multiple light-hours of lag. Setting timeouts optimized for planet-local networking is dumb.

Anonymous 2016/3/30/16:0:25 No.582975


>That goes for anything really, want a mirror? Just mirror it on a machine with ipfs.

This is already sort of possible with static content but the memory used for an "ipfs pin add -r" can add up to dozens of gigabytes for larger collections as of 0.4.0.

Anonymous 2016/3/30/22:10:30 No.583194


>shits directories directly into /

>network I/O in src_compile instead of using golang eclasses

>never heard of repoman

Hell fucking no, maybe I'll look at this again in half a year

Anonymous 2016/4/1/5:11:45 No.583483


That's just the first result after a 2 second Google search. Why not just install Go and build it yourself until a maintainer makes an ebuild?

go get -d
cd $GOPATH/src/
make toolkit_upgrade
make install

Anonymous 2016/4/1/23:34:56 No.584168


Optimized version

go get -v -u -d && cd $GOPATH/src/ && make toolkit_upgrade install

Anonymous 2016/4/2/11:55:47 No.584552


Anon please.

Anonymous 2016/4/5/14:11:28 No.587013

File: 1462446689066.jpg

I'm excited.

Anonymous 2016/4/5/14:26:41 No.587017

File: 1462447601904.gif


Get hype.

Anonymous 2016/4/5/15:46:56 No.587042

so what is the anonymity situation with IPFS right now?

how long until RIAA assholes sit in on files to watch who downloads them like they do torrents?

Anonymous 2016/4/5/16:6:4 No.587046

idea for a textpunk ipfs project: IPFS man page viewer app.

Anonymous 2016/4/5/17:6:45 No.587063


You can't determine the contents of a file based on its hash. You could try to download every request you see on the network and analyze the files that way, but it is much more difficult to keep track of people than with plain BitTorrent.

Anonymous 2016/4/5/19:52:40 No.587131


your IP is fully visible (unless you're using a VPN or obscuring it some other way) just like a plain torrenter. They've said they're not settling on that as ideal, but it's still in alpha so what do you expect?


yeah but you can hash your own files and (IPFS relies on this) the hash will be identical to that of every other copy of the file on the network. So they could just find the IP addresses of everyone sharing material that matches the hash of their copyrights. In fact it might even be extremely economical for them to do this, rather than scouring public trackers for copies of their media.

>but the userbase right now is so small that it's a non-issue

>and by the time it's become more popular, hopefully some solution has already been implemented, or VPN has become more standard among file-sharers

Anonymous 2016/4/6/16:56:53 No.587750




I'm writing this portion of my post after I wrote the other ones because it came to mind later. If IPFS can be used as a replacement for something like Dropbox, Syncthing, etc. then can they actually fault people for sharing files with themselves? Does intent matter here? Like if I want to share a movie between all my machines using IPFS I am allowed to do that, if someone else knows the hash then they can also retrieve it from me but that's not my fault, right?


In the future I can see there being things that disrupt accurate monitoring; the simplest one I can think of is message passing: they can't just ask "who has this file?" because while I may not have it I may be able to reach someone who does and relay it from them, so my client could report that I can serve it to you but it doesn't necessarily mean I am hosting it.

It depends on how they implement that feature though; if they make it distinct whether you're relaying or hosting, you'd just have to make a modification so your client reports it's hosting nothing and relaying everything. I don't think they're able to punish you for just acting as a relay node; I think the same thing applies legally to tor and freenet, but I could be wrong.

Outside of that you can use traditional things such as tor, a vpn, etc. eventually integrate i2p into it, maybe more things.


>So they could just find the IP addresses of everyone sharing material that matches the hash of their copyrights.

The problem here is that it has to be an exact match. If I take a CD or bluray, etc. and rip the contents, the resulting file is going to be somewhat unique. The exception to this is when people get premade files from something like iTunes or Google Play, but you can't share those unmodified either, since the metadata contains some kind of special data (userid, stuff like that); once you strip that out the file hash will change, so they'd have to generate all the permutations a particular file could ever become. It could be fooled as easily as the r9k robot, just append data to the file somewhere
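The append trick is easy to sanity-check locally; any trailing byte changes the whole-file hash, so an exact-hash dragnet only catches bit-identical copies (sha256sum stands in here for IPFS's own hashing):

```shell
printf 'movie data' > a.bin
cp a.bin b.bin
printf 'ripped by xyz' >> b.bin        # the classic tag/junk-append trick

HA=$(sha256sum a.bin | cut -d' ' -f1)
HB=$(sha256sum b.bin | cut -d' ' -f1)
[ "$HA" != "$HB" ] && echo "hashes differ"   # always true after the append
```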

>lol RIAA blox

I see plenty of people do this with music tags already

>ripped by xyz



I wonder if something like this could be made too

>opentracker may mix in random IP address numbers for the purpose of plausible deniability.

Have some rogue IPFS client report that it has hash X at IP Y; that way, when you poll the network you get a list of valid and invalid peers hosting the hash, and you'd have to initiate the transfer on each to find out which are invalid. I don't think they can legally initiate a download, but I could be wrong, I'm not familiar with copyright laws. I'm sure someone will find a way to spam chaff or disrupt monitoring operations one way or another.

Anonymous 2016/4/8/9:8:18 No.588867


New pull request thread. This is the most edge of my seat I've ever been while watching code develop on Shithub. If this works correctly, that opens up a whole new fucking world for IPFS.

Anonymous 2016/4/8/13:46:46 No.588960


>I'm writing this portion of my post after I wrote the other ones because it came to mind later. If IPFS can be used as a replacement for something like Dropbox, Syncthing, etc. then can they actually fault people for sharing files with themselves? Does intent matter here? Like if I want to share a movie between all my machines using IPFS I am allowed to do that, if someone else knows the hash then they can also retrieve it from me but that's not my fault, right?

I am sure the Jews will do whatever inconveniences you the most. Probably, that means it counts as copyright infringement.

Anonymous 2016/4/8/13:47:47 No.588961


>Outside of that you can use traditional things such as tor, a vpn, etc. eventually integrate i2p into it, maybe more things.

They have already announced that long-term they intend to add TOR and I2P support. See also their image >>580949

Anonymous 2016/4/8/15:7:55 No.588987

ANSI C implementation when? D:

Anonymous 2016/4/8/16:3:13 No.589016


js is somehow better than go?

Anonymous 2016/4/8/16:3:25 No.589018


wont that just be insecure and crashy?

Anonymous 2016/4/8/16:10:15 No.589021


Bugs come and go, but eventually they all get fixed. Shit languages requiring a huge runtime and preventing us from running it on super low end ARM boards won't.

Anonymous 2016/4/8/16:12:38 No.589024


>This is the most edge of my seat I've ever been while watching code develop on Shithub

It's incredibly exciting. A part of me keeps expecting something to collapse, the way things are going we'll have an entirely distributed web model soon, where individuals can go back to self-hosting their sites like the original plan was.

Anonymous 2016/4/8/16:38:17 No.589035

File: 1462714697547.png


GCC can statically compile Go. Don't bash a language based on one compiler.

Anonymous 2016/4/8/16:53:37 No.589044


>GCC can statically compile go

Including the runtime into the binary isn't the same as really compiling. Plus, garbage collection.

Anonymous 2016/4/8/16:55:20 No.589047


yes js is


Anonymous 2016/4/8/22:57:53 No.589241


I've been wanting to use this for a while, but every time I try, something's off. The biggest problem for me is that it slows all my network traffic to a crawl. As soon as I start the daemon it will predictably slow everything, even DNS resolution, to 30-second-long endeavors at best. I don't know what's causing this, because I'm monitoring my network card and there's not a lot of bandwidth usage (no more than 215 KiB/s).

I've also tried adding things, but for whatever reason it will never upload through the official resolver. (e.g. I've put the 'With Open Gates' mp4 video up, but it will only work on localhost and I can't reach it through the public gateway.) I've been able to access files others have added just fine though.

What am I doing wrong? or is this just the result of alpha software?

Anonymous 2016/4/8/23:50:4 No.589258


>The only real compiling language is ASM


Anonymous 2016/4/9/2:4:1 No.589312


Asm isn't compiled, m8. It's assembled using an assembler.

Also, I indeed used the wrong word, but Go is still garbage collected shit with an enormous runtime.

Anonymous 2016/4/9/5:2:2 No.589392


Ok we agree.

Anonymous 2016/4/9/10:10:15 No.589486

File: 1462777816051.gif

OpenBazaar includes IPFS now. It's fucking happening.

Anonymous 2016/4/9/14:2:13 No.589545


>they can't just ask "who has this file?" because ... my client could report that I can serve it to you but it doesn't necessarily mean I am hosting it

isn't that more the way freenet works? I thought that you were only able to serve the files that you've either "pinned" (mounted) or that you're actively accessing. But I could be wrong idk

I think in the end the whole copyright infringement thing is a moot point, anyway, because those guys are only now figuring out how to track torrents -- how long will it take them to figure out what IPFS is, let alone do anything to combat it? [spoiler]and look how quickly even normalfags got around their torrent-monitoring[/spoiler]

re hashes changing with every file, isn't that something we want to avoid if we want a truly universal file system? eg, one giant "Movies" directory containing the hashes of all movies you could possibly desire -- it makes sense to have separate hashes for dvdrip, 720, 1080, etc. But it would defeat the purpose if there are twenty hashes for the same movie, varying only in the type of encoding or because the ripper has put some shitty subtitle intro at the start. If everyone somehow agreed on the "ideal" rips of each movie/song/whatever and decided to use only those (like bakabt/private trackers) then there might evolve something like netflix (standardized quality, on-demand, huge library) but for any kind of file, with a bigger library, free, decentralized, and completely generated by the users.

that's the kind of shit I dream about

also re live content, there seems to have been some pretty big leaps made with Orbit, the chat client, though that's obviously different to something like a smoothly-updating twitter feed

Anonymous 2016/4/10/3:39:25 No.590014


Can I get an explanation about what's good about this? Is it trying to make every file on there completely unique?

Anonymous 2016/4/10/14:57:30 No.590301


It's most likely just your ISP fucking your shit up, anon. I know it because mine does the same. Use a VPN or host ipfs on a server.


Think of IPFS as torrents (trackerless, DHT based), just usable for hosting websites and other stuff you want to stream or download quickly and sequentially on demand.

It's good because it can dramatically reduce bandwidth usage for peers hosting popular content (much better scalability) and also help minimize the effects of DoS attacks.

Anonymous 2016/4/10/22:16:35 No.590544


You should post about it on the issue tracker; the devs probably know how to get the information they need to fix it. There's only one other issue I see about IPFS killing someone's network, so it would probably be valuable for them to get information on what's causing issues in these rare cases.




>isn't that more the way freenet works? I thought that you were only able to serve the files that you've either "pinned" (mounted) or that you're actively accessing. But I could be wrong idk

That's correct, message passing isn't implemented yet and will likely be off by default when it is implemented.

There are talks about a share system like freenet's, but all of that is on top of IPFS and not part of the reference IPFS implementation. There are a couple of third party projects now that allow people to opt in as voluntary mirrors, and the IPFS devs are going to make "filecoin" eventually, which is like a separate project for essentially renting IPFS peers who opt into their service; the peers hosting get a reward token for doing it, which they can exchange in the same way for the same service. I like the idea, I can essentially loan out my bandwidth when I'm not using it and spend the tokens to host files I care about when I do need my bandwidth.

>re hashes changing with every file, isn't that something we want to avoid if we...

Sort of. The really nice thing about IPFS is that it chunks content at a block level: if you and I have the same MP3 but different metadata for it, we're both still hosting the audio portions, and the same is true for other file types as well; so long as most of the parts are actually the same it shouldn't matter. Obviously though, 100% of the hash you want has to be available, so someone would have to be hosting the original metadata, or have some convention of like 0-padding the end of a file, etc.
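The sharing-most-parts idea is easy to demonstrate with fixed-size chunks; here 4-byte chunks stand in for IPFS's much larger blocks, and the two toy "MP3s" differ only in their trailing metadata:

```shell
printf 'AAAABBBBCCCC' > song1    # shared "audio" + original tail
printf 'AAAABBBBtag2' > song2    # same "audio", different metadata tail

split -b 4 song1 c1_             # cut both files into 4-byte chunks
split -b 4 song2 c2_

# Count chunk hashes that appear in both files: the two leading chunks match.
SHARED=$(sha256sum c1_* c2_* | cut -d' ' -f1 | sort | uniq -d | wc -l)
echo "$SHARED shared chunks"     # prints: 2 shared chunks
```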

I hope people adopt a container-less video standard. Imagine instead of grabbing mkvs you just point your media player to a directory hash which contains a video stream, an audio track, and a subtitle track; you could keep the video track, all the audio tracks, and all the subtitles in 1 directory hash and have the media player only fetch the ones you prefer, using filenames like video.h264, en-audio.aac, en-subtitles.ass. Directories are technically a container format I suppose, but I like the idea of splitting the streams like that for the above reasons; container-less streams seem much better (and more assured) for maximum distribution, since everyone has to at least hold the same video stream, so it would have a shitload of peers, unlike with torrents where there's a lot of fragmentation of peers despite most of the data being the same. Like if someone uploads an mkv that has a video stream, an audio stream, and an English subtitle track, that's a whole separate torrent and swarm from a torrent containing the same video and audio stream but a different subtitle set.

>eg, one giant "Movies" directory containing the hashes of all movies you can possible desire

The coolest thing imo too is that directory hashes are free, so you can easily have several lists of the metadata in any kind of format you want without having to duplicate the data or symlink everything. On top of that you can mount hashes, even dynamic IPNS hashes; imagine just mounting the de-facto "movies" hash to ~/Movies, a constantly updated directory containing movies that you can just pick from whenever. High tier. You can already do this somewhat well now.

>Orbit, the chat client

Very impressive and interesting, I'm gonna look at this more later.

I hope all that I said makes sense, I should stop posting this late since I get rambly but I'm too excited to wait until a time I'm not tired, typing helps keep me awake when I need to stay up too.

If anything I said is incorrect please correct me, if I'm not being clear feel free to ask and I'll try my best.


As it is right now, if you want to share a file on IPFS you need a copy of it to live in the blockstore; that patch is going to make it so you don't need to duplicate it.

It resolves this:

It's a big deal because people will instantly be able to share massive amounts of data without needing 2x the storage.

If you mean IPFS itself I think this is a good article.

Anonymous 2016/4/10/22:50:13 No.590587

File: 1462909813695.jpg


That's good. I hope all these new distributed web technologies band together to become something more than meme software.

Anonymous 2016/4/10/23:20:19 No.590616

File: 1462911620425.png


>that BigchainDB presentation

My mind is blown and my cock is diamonds.

Anonymous 2016/4/12/18:44:56 No.593103


>Sort of, the really nice thing about IPFS is that it chunks content at a block level, if me and you have the same MP3 but different meta data for it we're both still hosting the audio portions, the same is true for other file types as well, so long as most of the parts are actually the same it shouldn't matter. Obviously though 100% of the hash you want has to be available so someone would have to be hosting the original metadata or have some convention of like 0 padding the end of a file, etc.

Just want to throw in that IPFS now uses a rabin chunker, so in theory it doesn't even matter if the metadata causes the same audio track to be offset somewhat.
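The failure mode rabin fixes is easy to see with fixed-size chunking: prepend a single byte of metadata and every chunk boundary shifts, so nothing is shared any more; a content-defined chunker finds boundaries from the data itself, so the shared run lines up again (4-byte chunks again stand in for real block sizes):

```shell
printf 'AAAABBBBCCCC'  > plain
printf 'xAAAABBBBCCCC' > tagged   # 1 byte of leading "metadata"

split -b 4 plain p_               # fixed-size chunking of both files
split -b 4 tagged t_

# Zero chunk hashes in common: the 1-byte offset ruined fixed-size dedup.
SHARED=$(sha256sum p_* t_* | cut -d' ' -f1 | sort | uniq -d | wc -l)
echo "$SHARED shared fixed-size chunks"   # prints: 0 shared fixed-size chunks
```

With go-ipfs you can opt into content-defined chunking at add time via ipfs add --chunker=rabin (flag name per recent releases; check ipfs add --help).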

Anonymous 2016/4/12/19:17:12 No.593117


>it makes sense to have a separate hashes for dvdrip, 720, 1080, etc. But it would defeat the purpose if there are twenty hashes for the same movie, varying only in the type of encoding or because the ripper has put some shitty subtitle intro at the start.

They'd converge around one gold standard without bullshit, the same way No-Intro for ripped games killed off all those cancerous chinese ROM sites that demand you use IE6.

Anonymous 2016/4/14/9:22:47 No.594928


It just keeps getting better and better, holy shit.

Anonymous 2016/4/14/16:36:24 No.595153

File: 1463232984145.png


the filecoin idea will hopefully work as an incentive for both people and larger organisations to act as nodes, by rewarding people who are able to prove that they can serve a file. It seems to be geared towards encouraging people to "seed" more neglected files, too (if it seems likely they will be requested more in the future). I'm looking forward to the days when meme futures are a legitimate commodity.

>miners are incentivized to acquire whichever pieces are covered by the fewest other nodes, since these have a significant chance of yielding a profit upon a future block-minting challenge.

I was looking further into how this will work in more general society (since a lot of people will probably instinctively say "like the darknet? no thanks, I don't wanna host cp"), and apparently ipfs has already received a bunch of dmca complaints -- because they're using their site as a node, they're responsible for the content served. Their response has been to maintain a blacklist of dmca'd hashes, which they will not host and which anyone can opt into. It gets updated whenever they get served a new one. (But obviously you don't have to use it, and you'll still be able to access the content via other people who are ignoring it.) It seems like a pretty neat solution, so that the file-sharers can keep doing their thing, while the more nervous/liable can just filter everything through that list and feel safe.

This basic technique would also work for any other kind of content-avoidance, right? So you could choose the degree and type of moderation that you're subject to, and the (now-manual) task of moderating mainstream forums/networks could be automated by making something like PhotoDNA >>550706, while those who don't mind wading through gore and shitposts can go bareback.
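Mechanically, an opt-in blocklist is just set subtraction over hashes; a toy sketch with made-up hash names (a real client would fetch the published list and filter its pin/serve queue the same way):

```shell
# A published denylist, and a local queue of hashes we were asked to serve.
printf 'QmBadHash1\n' > denylist.txt
printf 'QmBadHash1\nQmGoodHash2\n' > queue.txt

# Keep only hashes not on the denylist (-F fixed strings, -x whole-line match).
KEEP=$(grep -vxFf denylist.txt queue.txt)
echo "$KEEP"    # prints QmGoodHash2
```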

>imagine instead of grabbing mkvs you just point your media player to a directory hash which contains a video stream, an audio track, and a subtitle track

I suppose then that directory would need some pointer to tell your browser/file manager what it contains, how to interpret it, but that's not a huge ask ... it would be really great if it resulted in meaningful standardization. Pirating wouldn't even have to grow to see huge improvements, if it was able to get its shit together and have everyone agree on the desirable files and seed together.

>a constantly updated directory containing movies that you can just pic from whenever

The obvious example is having a directory for a TV show which updates every time there's a new episode, or a youtube channel, or "dave's monthly tentacle porn roundup", but I'm thinking this would also be great for community-oriented filesharing. You would access content by following a variety of different "directories" which essentially correspond to your online communities. But now I really am getting ahead of myself...

Anonymous 2016/4/14/20:38:9 No.595258


I need to read up more about filecoin; that almost sounds like perfectdark. They have a sekai system that tries to distribute things by popularity: the weakest files usually get distributed to many peers at first but eventually die out from lack of popularity; if space is limited amongst the peers and the file isn't requested often, it can essentially be bumped off to make room for newer, less distributed files, but it takes a while for it to drop down the list like that. At least I think it works that way; at the time I was reading about it, I couldn't find much English documentation.

>I suppose then that directory would need some pointer to tell your browser/file manager what it contains, how to interpret it

I think you could get away with just supporting directory hashes, since the hash can be parsed to display its contents; then the player just needs to do things like parse the audio and subtitle names with preferences (so it picks the one you desire). So like, it sees a directory, searches for playable file types, and either plays the first or displays a playlist. You could present a list that isn't just a 1:1 file picker if the player does something like assuming that "file.h264" and "file.en.aac" are meant to have a title of "File (English)", or something like that, but that may be stretching it a bit.

An example of what would be expected to be parsed would be the output of

ipfs ls QmVBEScm197eQiqgUpstf9baFAaEnhQCgzHKiXnkCoED2c

Format is hash:filename

You can also do

ipfs object get QmVBEScm197eQiqgUpstf9baFAaEnhQCgzHKiXnkCoED2c

to get the same contents in json, xml, protobuf, and maybe more in the future, that should be good for media players to parse and deal with.

I think there may be other ways to get information from a directory node, but I forget, since those two are the best-looking ones for human and machine output respectively.
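For what it's worth, the player-side parsing could be sketched like this (a Python toy, assuming ipfs ls prints one whitespace-separated hash, size, and filename per line; the exact output format may vary between versions, and the PLAYABLE extension list is just made up for illustration):

```python
import os

# Toy sketch: turn ipfs-ls-style directory output into a playlist.
# Assumes "hash size filename" per line; real output may differ.
PLAYABLE = {".h264", ".mkv", ".mp4", ".aac", ".webm"}

def parse_ls(output):
    """Parse ls-style output into (hash, filename) pairs."""
    entries = []
    for line in output.strip().splitlines():
        parts = line.split(None, 2)  # hash, size, rest is the filename
        if len(parts) == 3:
            entries.append((parts[0], parts[2]))
    return entries

def playlist(entries):
    """Keep only the names with a playable extension."""
    return [name for _, name in entries
            if os.path.splitext(name)[1].lower() in PLAYABLE]

sample = """QmAaa 123 movie.h264
QmBbb 456 movie.en.aac
QmCcc 789 notes.txt"""
print(playlist(parse_ls(sample)))  # ['movie.h264', 'movie.en.aac']
```

A real player would run something like this over the directory listing and then apply language preferences when picking between the audio tracks.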

Your idea may be better though, some kind of standard format that groups them with explicit metadata, maybe some json that's like

video { title: "Movie Title", video: "file.h264", audio: { English: "file.en.aac" }, subtitles: { ... }, ... }

I'm not sure.
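A well-formed version of that sketch might look like the following (every field name here is invented for illustration; nothing like this is an IPFS standard):

```python
import json

# Hypothetical grouping format for a directory of media files.
# All field names are invented for this example, not a real spec.
metadata = {
    "video": {
        "title": "Movie Title",
        "video": "file.h264",
        "audio": {"English": "file.en.aac", "Japanese": "file.jp.aac"},
        "subtitles": {"English": "file.en.srt"},
    }
}

# A player would fetch this JSON alongside the directory and pick
# tracks according to the user's language preferences.
blob = json.dumps(metadata, indent=2)
parsed = json.loads(blob)
print(parsed["video"]["audio"]["English"])  # file.en.aac
```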


I'm a fan of that blacklist as well, it's a real nice option to have that satisfies everyone since it's optional.

>You would access content by following a variety of different "directories" which essentially correspond to your online communities

This reminds me of usenet and gopher. Pretty neat.

Anonymous 2016/4/15/2:55:33 No.595395



Statically linked binaries seem like they're advantageous for web applications where the server is chrooted to its own directory, which is what Go is made for.

Anonymous 2016/4/17/5:15:31 No.596826

File: 1463451331281.jpg


>I imagine that this is going to take a while before it gets merged. So for now I think it will be best to maintain this as a separate fork while the goal of eventually merging everything in.

Anonymous 2016/4/17/5:48:33 No.596836


Did your mom drink while pregnant or did you become retarded after birth?

Anonymous 2016/4/17/5:50:3 No.596838

Could I use this to replace NFS in my home? NFS is a pain in the arse for me.

Anonymous 2016/4/17/7:31:26 No.596891


You could use it to send files across a local network with basically no configuration. Just get an ipfs daemon running on both computers. Problem is:

>pinning a file means you'll have to make a second copy of the file and undergo a CPU-intensive hashing process

>it might search the nodes looking for the file before it looks for it on the local network, so it may take a long time to find the file (they may have fixed this)

>it's available for everyone online, so private files are a no-go if the serving computer is internet-facing and you need those files available indefinitely (not that anyone would find it but whatever)

Not the best tool by any means, especially if you send a lot of big files. I prefer samba for serving files on a local network in the long term or scp/rsync if I need to send something over once.

Anonymous 2016/4/17/7:44:16 No.596900


It's mostly just movies that I found lying on the side of the road and made backups of, with some .blend/GIMP and .c/python/php files scattered in for good measure. I've used samba before, it's just that I hate the idea of an smb implementation on my personal network. I feel like anything that started off from MS is going to have intrinsic problems with security. Then again, this is IPFS we're talking about here.

Anonymous 2016/4/17/7:48:19 No.596902


It can be used over a local network though, I think you just disable the default DHT bootstrap nodes. Don't quote me on that though.

Anonymous 2016/4/17/8:23:38 No.596922


>The obvious example is having a directory for a TV show which updates every time there's a new episode, or a youtube channel, or "dave's monthly tentacle porn roundup" but I'm thinking this would be also great for community-oriented filesharing. You would access content by following a variety of different "directories" which essentially correspond to your online communities. But now I really am getting ahead of myself...

This is already implemented & somewhat functional by using IPNS. cf. $(ipfs name publish --help) for details

Anonymous 2016/4/17/14:18:57 No.597055


The copy thing will be fixed whenever this is merged (apparently it's good enough for testing currently) >>588867

In my experience, if I have a file locally and am connected to the network, it will retrieve the file instantly. I'm not aware of any issue that prevented this before, so I can't say if anything was ever fixed, but it's been fine for me with media files and my media player, as well as other random files and my web browser, when nobody else has a copy of the file I'm requesting except me.

For the last one, you can just not connect to the network for now.

You essentially create a private swarm and tell your node to try to connect to the other one(s).

It's not a perfect solution, since someone who knew your address could still manually connect to you if your daemon was reachable, but it should work for now until they implement private networks, which is a planned feature:


You do that and add the other node, unless mDNS finds it automatically which may work, I'm not sure. You can disable mDNS too if you don't want to be reachable by everyone on the local network.


This is still good news since it's all clientside, master doesn't need to merge it first for you to take advantage of it. Just grab it and compile it.


I'm really looking forward to pub sub, have they made it so you can have multiple IPNS hashes yet? That's an important one for me since I would want to separate various file sharing lists from say a web site.

Anonymous 2016/4/19/2:18:2 No.597989


OMFG the guy got offered 100 buck to at least start implementing Tor/I2P support and all he did was "LOL"?

What a faggot. I'm with you dude. IPFS Never.

Anonymous 2016/4/19/2:37:44 No.598004


TOR/I2p should just werk


Anonymous 2016/4/19/6:30:40 No.598182

File: 1463628644254.jpg

if I pin QmTmMhRv2nh889JfYBWXdxSvNS6zWnh4QFo4Q2knV7Ei2B will it immediately start downloading and sharing the entire gentooman library?

Anonymous 2016/4/19/8:55:27 No.598251


Do you really expect them to create a 100% safe Tor implementation before they're even done with basic features? If it isn't ready for the clearnet, why should they take time out to make sure it works for a small amount of use cases?

Anonymous 2016/4/19/9:4:5 No.598255


how is torrenting bad for tor but this won't be?

Anonymous 2016/4/19/9:12:56 No.598256

File: 1463638377519.jpg


>mfw already DMCA'd it

>mfw is still untouched

I think just using pin will allow you to share it but not to have a local copy (outside of that clusterfuck of chunks that IPFS creates to share files). Use ipfs get to download the file and then ipfs add if you want to share it.

Anonymous 2016/4/19/9:20:23 No.598259


and that will automatically allow me to share to the same address?

Anonymous 2016/4/19/9:21:12 No.598260




Pinning something makes it so IPFS does not remove it when it does its garbage collection. If you have the content in the datastore already it just flags it to not be deleted, if you don't it will fetch it and flag it.

"get"ing something downloads it but does not flag it, "add" will add it and flag it to not be deleted.

So you just have to pin something if you want to mirror it, you don't have to manually download it then manually add it, pinning does that for you.


Just pin it, or you can "get" it too and that will share it until the next garbage collection.

Anonymous 2016/4/19/9:24:15 No.598262


If you want to know the really wacky thing, I came across this library on google because I searched for one of the books in it. I can see how they would have found out about it once that starts to happen.

Anonymous 2016/4/19/9:31:45 No.598269



I should probably clarify, get will download chunks to the datastore locally AND output to a file/directory in the current directory.

Pinning will make sure there is a copy in the datastore locally that won't get deleted until you unpin it, you can access the chunks through your local gateway or even "get" them (even when the daemon isn't running or has 0 peers since you have a copy in your datastore that you can reach).

Add will take local files or directories and copy them to the local datastore then pin them.

As long as a hash is in your datastore you can access it locally even while offline. You need the daemon running if you want to see it through your local gateway, but with get you don't need the daemon running.

If a file is in the datastore and your daemon is running, you're sharing it.

I hope I explained that well.
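Those semantics can be modeled with a toy datastore (purely illustrative Python; this is just the pin/get/add/gc behavior as described, not how go-ipfs is actually implemented):

```python
# Toy model of the pin/get/add/gc behavior described above.
# Illustrative only; go-ipfs stores nothing this way.
class ToyNode:
    def __init__(self):
        self.datastore = {}   # hash -> content held locally
        self.pinned = set()   # hashes protected from gc
        self.network = {}     # stand-in for the rest of the swarm

    def add(self, h, content):
        """Copy local content into the datastore and pin it."""
        self.datastore[h] = content
        self.pinned.add(h)

    def get(self, h):
        """Fetch into the datastore (unpinned) and return the content."""
        if h not in self.datastore:
            self.datastore[h] = self.network[h]
        return self.datastore[h]

    def pin(self, h):
        """Fetch if needed, then flag it so gc won't remove it."""
        self.get(h)
        self.pinned.add(h)

    def gc(self):
        """Drop everything that isn't pinned."""
        self.datastore = {h: c for h, c in self.datastore.items()
                          if h in self.pinned}

node = ToyNode()
node.network["Qm1"] = "episode"
node.get("Qm1")   # cached and shared, but gc will drop it
node.gc()
print("Qm1" in node.datastore)  # False: gone after gc
node.pin("Qm1")   # fetches again and flags it
node.gc()
print("Qm1" in node.datastore)  # True: survives gc
```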

Anonymous 2016/4/19/9:39:10 No.598270


>OMFG the guy got offered 100 buck to at least start implementing Tor/I2P support and all he did was "LOL"?

The guy laughing is not a developer or project member, he's the one that's requesting the feature.

The project leader said they're going to work on it.

Anonymous 2016/4/19/10:0:31 No.598275

turns out x86 ipfs will not run on pi

this is racism

Anonymous 2016/4/19/10:2:23 No.598276


nvm there is actually an arm build available

was worried because I've literally never been able to build this kind of thing myself and get it to work

Anonymous 2016/4/19/15:25:53 No.598354

What is IPFS using all this bandwidth for while it's not even hosting anything?

Anonymous 2016/4/19/16:5:3 No.598375


If you just started, probably to contact other nodes and the DHT. Not that much tbh, but it might still be enough to saturate a slow residential connection for a while.

Anonymous 2016/4/19/16:45:31 No.598384


>turns out x86 ipfs will not run on pi

>this is racism

Do they know about this issue? Is it covered in their CoC?

Anonymous 2016/4/19/16:53:24 No.598388


what do?

anon@anon ~ $ go get -d
package cannot download, $GOPATH not set. For more details see: go help gopath
anon@anon ~ $

Anonymous 2016/4/19/17:17:29 No.598398


You should set $GOPATH, e.g. export GOPATH="$HOME/go" in your shell profile, and add $GOPATH/bin to your PATH while you're at it.

Anonymous 2016/4/19/17:59:44 No.598408

So, basically, gopher protocol with some fancy bittorrent distribution method?

What, never heard of gopher? Google it. You're welcome.

Anonymous 2016/4/19/18:37:41 No.598422


Try this

go help gopath


It covers more and has different ideals than gopher did. What similarities do you see besides their OSI level?

Anonymous 2016/4/20/1:5:10 No.598593


Because of the premature optimization meme, hopefully the resource usage will go down a lot once they put some effort into it.

Anonymous 2016/4/21/1:38:57 No.599063

How do you list all the people a file is being served by?

Anonymous 2016/4/21/1:48:35 No.599070


Anonymous 2016/4/21/2:5:53 No.599080


Java. And i2pd doesn't have a real torrent client.

Anonymous 2016/4/21/2:38:34 No.599107

Can IPFS already be used for hosting translations of Light Novels and Web Novels? Considering it can do static content really well right now I don't see that there would be any problem.

Also, I can't find a satisfying answer for this, but how would a site fare against DMCA C&D notices? Would it be really easy for people to ignore it and go to the site and read whatever they went there for?

Anonymous 2016/4/21/6:16:57 No.599257


ipfs dht findprovs <key>

where <key> is the file's hash. This will print the peer IDs of all of the providers it can find for that file.

You can also do:

ipfs id <peerid>

for any particular id to get more info about them


I don't see why not, but be warned that it doesn't have anything built-in to actually help distribute the file, meaning that all "adding" something to ipfs does is add it to your node's cache and tell people that you're seeding it. If you go offline then no one will be able to access it until you come back online.

They do plan on making something that works on top of ipfs to accomplish this called "filecoin" ( ), but I'm not aware of any progress on it yet. There's also this , but I have no idea how well it works.

Really, though, I imagine you could get away with just adding the translations to ipfs (which also pins them by default), and then telling any friends, readers, etc. to help pin them with "ipfs pin add <file hash>" (pinning, by the way, just tells ipfs not to delete the file from its cache after a while, and pinning a file you don't have also downloads it). And even if you don't do that, ipfs could still help if your file is popular enough, as it will spread and get more seeders sharing it.

As for your second question, it depends on what you mean by a "site." There are a few ways that I know of to use ipfs for a website:

- IPFS as a Backend: This is basically the same as a regular website, but the server retrieves the files it needs through ipfs. This could potentially be useful for a big website with lots of content and lots of servers, but would be pointless for a small site. For a DMCA, the server would likely just block the DMCA'd file hash on ipfs (in fact, they already have a dmca blocklist for ipfs that you can apparently use . Great for small sites that don't want to get hassled for providing things over ipfs, but still completely optional). That said, the file could still exist on ipfs, and other people interested in it could manually pin and share it if they wanted to, but it wouldn't be accessible from the site.

- IPFS All the Way Down: Removing the need for a central server completely and hosting everything on ipfs. This is doable now with ipns, but in my experience it has issues (alpha software). If they ever get ipns to be reliable, though, this would probably be perfect for what you want to do. As for DMCA's, you would probably have to stop seeding any DMCA'd files on your node, but I don't believe you would have to delete the reference/link to it, which means other people could continue to pin/share the file and it would still show up as normal on the site. Main drawback of this method is that people would have to install ipfs to access the site, which the vast majority of people will not be willing to do (whether this is a good or bad thing depends on your point of view).

- HTTP + IPFS: What I mean by this is sort of combining the last two. The way you'd do this is serve the site like normal from a central server but have some javascript that detects if the person viewing the website also happens to have an ipfs daemon running in the background, and start using that to download content instead. This means that normal people can still view the site, but anyone with ipfs will also help distribute the content and reduce the load on your server. In terms of DMCA's, people that don't have an ipfs daemon running couldn't see blocked files, but people that do potentially could (if it's being seeded). They are also apparently nearly finished with the javascript implementation of ipfs ( ), which would mean your site could load things through ipfs whether or not they already have it installed and running, which is pretty exciting.

- Any/All of the Above + Distributed Tech of the Week: This means combining any of the above methods with other things like Ethereum or BigchainDB. There's really any number of ways that you could do this, and each has different pros/cons (to be honest, though, this is kind of where I think things are going in the long run). For avoiding DMCA's (or censorship of any kind) my rule of thumb is "the more distributed, the better," but anonymity can also play a factor, which is why a lot of people are interested in ipfs supporting Tor/I2P. You could look into these if you're interested, but it's probably overkill for what you have in mind.

Sorry for the massive explanation, but there's a lot to talk about. Have some raps

Anonymous 2016/4/21/15:54:1 No.599413


So basically in layman's terms, you can circumvent DMCA in IPFS, but you have to have IPFS installed, otherwise it's the same as HTTP in that the DMCA works. If I understand there could be some more advanced ways to circumvent it without installing it, but it's more of a hassle and in some cases the tech is not ready yet.

As for the first answer, I mean we can already store the translations in already existing tech like epubs, pdf and seed them with torrents or just store them in regular storage sites. The fact that if you go offline the site goes down is already true for HTTP, it's just that we leave that all to some company that has servers online all the time. The fact that some generous fellow can seed the site with the translations that's on your node while you are offline is already a bonus.

Anonymous 2016/4/22/0:40:5 No.599650


DMCA is optional. The IPFS team would rather focus on building the software and not getting sued, so they comply with all notices on their nodes/gateway and let other people deal with yar-har-fiddle-dee-dee'd content. (see >>598256)

Anonymous 2016/4/22/20:2:28 No.600381


>suddenly a new security vulnerability in a Go library appears

good luck recompiling all applications again.

don't duplicate information. ever

Anonymous 2016/4/22/21:24:7 No.600417


>good luck recompiling all applications again.

go get -u ...

Fetches the source, recompiles, and installs all installed packages if they're new or changed. One of Go's talking points is its fast compiler so this shouldn't be a real issue.

Nothing prevents you from using dynamic linking if you really want it either so I don't get why people always bring up that the standard compiler uses static linking by default, gcc has had dynamic linking since the beginning and the standard compiler has had it for a while now too.

>don't duplicate information

This is a whole separate issue but I really feel like deduplication should be a concern for the filesystem and memory manager. This is a solved issue, ZFS has done block level deduplication for years and Linux has had same page merging in memory for a while too. Dynamic linking seems like a vestige left over from the age of non-optimal software and limited hardware resources.

I'd much rather a binary be reliable and work 100% of the time than anything else, not have to be worried about dependency conflicts or repository fuck ups, let the OS worry about what it's supposed to like managing resources efficiently, likewise with compilers.

I'm not saying there isn't a place for dynamic linking or that it's not useful but I really don't understand the stigma with static linking, especially not today.
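The block-level idea itself is simple to sketch (a naive content-addressed store in Python; real filesystems like ZFS do vastly more than this):

```python
import hashlib

# Naive block-level deduplication: store each unique 4 KiB block
# once, keyed by its hash. Illustrative only.
BLOCK = 4096

class DedupStore:
    def __init__(self):
        self.blocks = {}  # digest -> block bytes

    def write(self, data):
        """Split into blocks, store unique ones, return the recipe."""
        recipe = []
        for i in range(0, len(data), BLOCK):
            chunk = data[i:i + BLOCK]
            digest = hashlib.sha256(chunk).hexdigest()
            self.blocks.setdefault(digest, chunk)  # skip duplicates
            recipe.append(digest)
        return recipe

    def read(self, recipe):
        """Reassemble a file from its block recipe."""
        return b"".join(self.blocks[d] for d in recipe)

store = DedupStore()
r1 = store.write(b"x" * BLOCK * 3)              # three identical blocks
r2 = store.write(b"x" * BLOCK + b"y" * BLOCK)   # one duplicate, one new
print(len(store.blocks))  # 2 unique blocks stored for 5 written
```

Static binaries sharing common library code would dedupe the same way at the block level, which is the point being made above.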

Anonymous 2016/4/27/5:30:59 No.602704

>it dies when you try to add something bigger than 30 MB

Great P2P system.

Anonymous 2016/4/27/7:52:18 No.602750


No, you are just incompetent. It adds multigig files just fine

Anonymous 2016/4/27/9:11:22 No.602763


I'd rather not get into the whole dynamic vs static linking debate, but what you're saying only makes sense from the point of view of someone who's compiling shit on a powerful desktop machine.

There are many reasons you'd want dynamically linked libs, and space is only one of them; to me security is the big one. Having to recompile all your software when there's a bug in your crypto implementation (ha ha) is a huge waste of time and resources, and if you forget some because of reasons, you're still vulnerable. Having more potential attack surface is bad even if it's not what true security is about.

Cross compilation is also a thing to take into account; there are some archs still used in production for which compiling is a long, tedious affair, and static linking would even make it impossible to maintain compatibility in certain scenarios.

That's not to say that static linking isn't the best choice in some scenarios; for instance, all my forensics tools are statically linked for obvious reasons, and I like to have static portable builds of my favorite software around.

But static by default is not a good idea, and while you do make a point about space, it's not the only issue here.

polite sage because off topic

Anonymous 2016/4/27/13:28:16 No.602801


>deduplication should be a concern for the filesystem and memory manager

You think RAM is cheap? Deduplication hash tables are huge.
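To put a rough number on "huge" (back-of-the-envelope only; the 4 KiB block size and 40 bytes per table entry are assumptions for illustration, not any particular filesystem's real numbers):

```python
# Back-of-the-envelope dedup table cost: one table entry per block.
# The block size and entry size here are assumed, not ZFS's figures.
TiB = 2**40
block_size = 4096   # bytes per block (assumed)
entry_size = 40     # bytes of hash-table entry per block (assumed)

blocks_per_tib = TiB // block_size          # 2**28 = 268,435,456 blocks
table_bytes = blocks_per_tib * entry_size   # 10 GiB of table per TiB
print(f"{table_bytes / 2**30:.1f} GiB of dedup table per TiB of data")
```

So under these assumptions a dedup table costs on the order of 10 GiB of RAM per TiB of unique data, which is why live dedup is expensive.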

Anonymous 2016/4/27/14:27:14 No.602812

> metrics "gx/ipfs/QmVL44QeoQDTYK8RVdpkyja7uYcK3WDNoBNHVLonf9YDtm/go-libp2p/p2p/metrics"

> Reporter metrics.Reporter

what is this bullshit?

Anonymous 2016/4/27/19:26:17 No.602896


That's beside the point; file deduplication should be the responsibility of the filesystem. Saying that we don't have a RAM-optimal solution doesn't change that, it just means there's room for improvement in filesystems. Besides, just because things like ZFS have RAM-expensive dedupe doesn't mean it's the only way to do it. I don't remember if this is right or not, but didn't BTRFS have some kind of passive dedupe method where it would essentially do dedupe passes at some interval? Maybe that was something else.


Sorry, I shouldn't have made a response to begin with on the static vs dynamic topic, I don't want to derail the thread but I was annoyed with people complaining about static linking. I understand the appeal and uses of dynamic linking but some people treat static linking like it's some kind of horrible thing that doesn't have its own uses. Forgive my tantrum.

Anonymous 2016/4/27/19:39:17 No.602904


Reading this file, the only data exported comes from this function

Which only seems to provide the number of connections.

Anonymous 2016/4/27/19:40:12 No.602905


And that's probably something you need to better balance the network.

Anonymous 2016/4/27/19:40:26 No.602906


Btrfs doesn't have live dedup yet. Dedup always needs huge hash tables because that's how it is.

Deduplication is best handled by compressors like lrzip for specific files.