--- Log opened Fri Dec 14 00:00:13 2012
00:00 <@piscisaureus__> execSync
00:00 <@piscisaureus__> I totally forgot that we need this for 0.10
00:00 * piscisaureus__ freaks out
00:02 < CoverSlide> execSync? why?
00:12 < TooTallNate> cause shell scripting 'nd stuff
00:12 < deoxxa> execSync... wow
00:12 < deoxxa> that's going to be interesting
00:13 <@isaacs> piscisaureus: oh, right
00:13 <@isaacs> shit
00:13 <@isaacs> piscisaureus: we don't really *need* it for 0.10
00:13 <@isaacs> piscisaureus: the one pushing for it the most has moved on to Go
00:13 < TooTallNate> isaacs: are we doing 0.9.4 this week? or next?
00:14 < TooTallNate> isaacs: felix?
00:14 <@isaacs> TooTallNate: we could do it tomorrow
00:14 <@isaacs> TooTallNate: yeah
00:14 <@isaacs> i'm kinda half-joking
00:14 <@isaacs> i mean, it'd be a great feature
00:14 <@isaacs> piscisaureus: but we can totally wait for 0.12 for that
00:14 <@isaacs> piscisaureus: i'd rather get what we have now into a good state, and release it
00:20 <@isaacs> Raynos, TooTallNate: new readable-stream pushed. 0.1.0
00:21 < TooTallNate> isaacs: :)
00:22 <@isaacs> i'm fixin to merge streams2 into master once we land that libuv update.
00:22 < TooTallNate> isaacs: so it's pretty cool using my own Readable (node-despotify) and my own Writable (node-speaker) and having .pipe() drive node :)
00:22 <@isaacs> piscisaureus: you recommend i merge ben's patch?
00:22 < TooTallNate> isaacs: in short, good fucking work on streams2 :)
00:22 <@isaacs> TooTallNate: thanks :)
00:24 <@piscisaureus> isaacs: ben's patch looks good to me
00:24 <@isaacs> k, i'm landing it
00:24 <@piscisaureus> isaacs: you don't have to merge it, but if you want to move forward it could be helpful :-)
00:24 <@isaacs> piscisaureus: did you land the libuv bit in libuv?
00:25 <@piscisaureus> isaacs: no
00:25 <@isaacs> want me to?
00:25 <@piscisaureus> isaacs: if you want to :-0
00:25 <@isaacs> i feel a bit weird putting a non-master version of libuv into node-master
00:25 <@isaacs> not sure why
00:25 <@piscisaureus> you are totally right
00:25 <@piscisaureus> don't do that
00:25 <@isaacs> k, i'll land it.
00:25 <@isaacs> admin rights ftw!
00:26 < MI6> joyent/libuv: Ben Noordhuis master * e079a99 : unix: fix event loop stall Fix a rather obscure bug where the event loop - http://git.io/yOwA8g
00:27 < MI6> joyent/node: Ben Noordhuis master * 6cf68ae : deps: upgrade libuv to e079a99 - http://git.io/gyzTiA
00:27 <@isaacs> ircretary: tell bnoordhuis Landed libuv-streams2-fix in libuv and node. Thanks!
00:27 < ircretary> isaacs: I'll be sure to tell bnoordhuis
00:28 < TooTallNate> isaacs: did the uv_work signature end up changing?
00:28 < travis-ci> [travis-ci] joyent/libuv#954 (master - e079a99 : Ben Noordhuis): The build passed.
00:28 < travis-ci> [travis-ci] Change view : https://github.com/joyent/libuv/compare/92fb84b751e1...e079a99abddb
00:28 < travis-ci> [travis-ci] Build details : http://travis-ci.org/joyent/libuv/builds/3655940
00:28 < TooTallNate> https://github.com/joyent/libuv/commit/e079a99abddb30a7f935792eda003b5ce37b396b#commitcomment-2305760
00:28 <@isaacs> TooTallNate: so, the after_work function takes an int "status" arg as well as the req
00:28 <@piscisaureus> isaacs: it doesn't really change that much
00:28 <@piscisaureus> isaacs: it'll just generate a warning
00:29 <@piscisaureus> people will be able to hide the warning by explicitly casting to uv_work_cb
00:30 < TooTallNate> piscisaureus: nice, i wasn't aware of this type
00:30 < TooTallNate> isaacs: did that change happen in a different commit?
00:32 <@isaacs> piscisaureus: yeah, if we have a simple workaround, that'll be good.
00:32 <@isaacs> piscisaureus: i mean, "explicitly casting" = "make a code change"
00:32 <@isaacs> piscisaureus: if they're going to change the code, they can also add the arg to their after function
00:33 <@piscisaureus> isaacs: yes
00:33 <@piscisaureus> isaacs: true ;-)
00:33 <@piscisaureus> isaacs: although then it will generate a warning on older versions of libuv
00:33 <@isaacs> true.
00:34 <@isaacs> so. i'm gonna merge streams2 into master tomorrow. last objections?
00:34 <@isaacs> or, maybe right now. cuz why not.
00:35 < Raynos> isaacs: thanks
00:35 < Raynos> isaacs: have you done any work to make anything here ( https://gist.github.com/3791742 ) wrong?
00:38 < mmalecki> isaacs: only thing I can suggest is making abstract data types a first class citizen
00:39 <@isaacs> mmalecki: that'll come before 0.10
00:41 < mmalecki> okay
00:45 <@piscisaureus> isaacs: so did you test streams2 on the glassy os?
00:46 <@isaacs> (merging v0.8 into master now)
00:46 <@isaacs> piscisaureus: not nearly as exhaustively
00:46 <@isaacs> piscisaureus: i'll do that prior to merging in
00:46 <@piscisaureus> kewl
00:46 <@isaacs> you mean Moon Safari Wonky Donky WeirdOS, right?
00:49 <@isaacs> Raynos: I don't think you need that gist any more. The docs are basically that.
00:50 <@isaacs> Raynos: and i want to support synthetic streams before 0.10 goes out (mentioned to mmalecki)
00:50 <@isaacs> Raynos: as you know, i think it's stupid, but whatever, people ask for that all the time
00:51 < Raynos> isaacs: I'll allot some time soon to port my abstractions to use _read and _write
00:51 < Raynos> then check whether I broke chain-stream :D
00:51 < Raynos> if chain-stream still works then it's probably ok
00:54 <@isaacs> Raynos: kewl
00:56 <@piscisaureus> isaacs: I mean chocolate factory os
00:56 <@isaacs> piscisaureus: chocolate factor?
00:56 <@piscisaureus> factory, even
00:56 < TooTallNate> isaacs: what will "support synthetic streams" entail?
00:56 <@piscisaureus> execSync execSync execSync
00:57 <@isaacs> TooTallNate: well, the arg to read() will be ignored, and it'll always return one thing
00:57 <@isaacs> TooTallNate: so "length" will just be the number of objects in the queue
00:57 < TooTallNate> well… ok…
00:58 <@isaacs> yeah
00:58 <@isaacs> streams are for bytes.
00:58 <@isaacs> but... well... you know.
00:59 <@isaacs> "users"
00:59 <@isaacs> ruin everything
00:59 <@isaacs> ;P
00:59 < Raynos> EVERYTHING
00:59 < TooTallNate> idk, it seems like a better abstraction should be possible
00:59 < TooTallNate> (*cough* generators)
00:59 < Raynos> https://github.com/gozala/reducers
01:00 < Raynos> streams of objects is nice for things like json parsing
01:00 < TooTallNate> Raynos: i get your stance, i just don't agree with it
01:01 < Raynos> the main problem is that we have no alternative to pipe for non-buffer stuff
01:01 < MI6> joyent/node: isaacs master * 77ed12f : Merge remote-tracking branch 'ry/v0.8' into master Conflicts: AUTHORS (+45 more commits) - http://git.io/Wrq-wg
01:01 < Raynos> if there was a nice pipe abstraction that left streams behind cleanly then awesome!
01:02 < TooTallNate> Raynos: but what's so bad about: JSONParser(readable)?
01:02 < TooTallNate> compared to readable.pipe(jsonParser)
01:02 < Raynos> what does it return?
01:02 < Raynos> does it return a non-stream?
01:02 < TooTallNate> an event emitter
01:02 < Raynos> that kind of works
01:03 < Raynos> I actually don't mind readable streams as arguments
01:03 < TooTallNate> Raynos: re: backpressure, make the "object" event take a callback for when you're done with it
01:03 < Raynos> But JSONStringer(writable) feels bad
01:03 < Raynos> that doesn't even make sense
01:04 < Raynos> as for a callback for the object event, meh, reinventing pipe
01:04 < TooTallNate> Raynos: i thought that's what you wanted!?
01:04 < Raynos> I don't know what I want!
01:06 <@isaacs> TooTallNate: the thing is, you want source.pipe(JSONParser()).pipe(DoesStuffWithObjects()).pipe(EncodesObjects()).pipe(destination)
01:06 < loladiro> piscisaureus: ping
01:06 <@isaacs> TooTallNate: so that you get backpressure the whole way down
01:07 < TooTallNate> ya i have problems with that but i digress
01:07 <@piscisaureus> loladiro: sup?
01:08 < Raynos> its not that pattern
01:08 < Raynos> Well meh
01:08 < Raynos> It doesn't matter :D
01:11 <@piscisaureus> loladiro: ?
01:47 < loladiro> piscisaureus: Oh, sorry, I had accidentally turned off notifications in my IRC client and missed your reply. Are you still there?
01:47 <@piscisaureus> loladiro: I am
01:48 < loladiro> piscisaureus: great. I wanted to see what your plans were with regards to ever using that new pipe API we had talked about a few months back.
01:48 <@piscisaureus> loladiro: ah yes that :-)
01:49 <@piscisaureus> loladiro: I still want to make it better but I was always distracted by more important stuff.
01:49 <@piscisaureus> loladiro: is there a specific problem you are running into at this time?
01:49 < loladiro> piscisaureus: Not at all, we've been using it and it's working great. The only thing is all the distribution maintainers complaining to me that we have an incompatible libuv
01:50 <@piscisaureus> loladiro: aah right haha
01:50 <@piscisaureus> loladiro: you mean folks like sgallagh :-)
01:50 <@piscisaureus> loladiro: what is the patch you are floating now?
01:51 < loladiro> We basically haven't touched libuv since implementing the patch back when we last talked (the one whose windows version was in the pull request)
01:52 <@piscisaureus> loladiro: only that -> https://github.com/joyent/libuv/pull/451 ?
01:53 <@piscisaureus> oh right i remember
01:53 < loladiro> piscisaureus: Plus the implementation of that on the linux side and a few minor changes that we should really get rid of
01:53 <@piscisaureus> uv_pipe_close_sync made kittens die
01:53 < loladiro> piscisaureus: your idea, not mine ;)
01:54 <@piscisaureus> yeah
01:54 <@piscisaureus> I had no better ideas
01:54 <@piscisaureus> I would have to berate you about mixing tabs and spaces as well
01:54 <@piscisaureus> :-p
01:55 <@piscisaureus> I think some of this stuff actually went in
01:55 < loladiro> Oh, did it?
01:55 <@piscisaureus> like, configurable stdio endpoints
01:55 <@piscisaureus> but I don't think we let you make spawn-safe pipes at this time
01:55 <@piscisaureus> so that's sort of shit
01:56 < loladiro> yeah, that's basically what we need
01:58 <@piscisaureus> hmm
01:59 <@piscisaureus> loladiro: I will take a look again.
02:00 <@piscisaureus> loladiro: that's all I can promise now
02:00 < loladiro> piscisaureus: Ok, thanks. Let me know if I can be of any help
02:00 <@piscisaureus> loladiro: so do these package maintainers really care at this point?
02:01 < loladiro> piscisaureus: Not really, I convinced them that it was a necessary evil, but then again it would be nice not to have to maintain a fork
02:01 <@piscisaureus> yeah
02:01 <@piscisaureus> ok, that's good
02:01 <@piscisaureus> I mean they are not packaging up libuv because we don't do releases ;-)
02:01 <@piscisaureus> I have built up such a huge backlog since last summer
02:02 <@piscisaureus> because that was also one of these things that I was supposed to be doing
02:08 < txdv> lulllzzz
02:16 <@piscisaureus> ok i quit once again
02:17 <@piscisaureus> have a nice day everyone
04:27 < rvagg> I don't suppose anyone has a commit ref handy for the fd leak fix in Node 0.8.16?
04:27 < rvagg> oh, forget it... pretty obvious in the commit history!
09:10 < darkyen> Is it necessary to
09:10 < darkyen> clear slow/fast buffers
09:10 < darkyen> explicitly if they originate from C++ ground?
09:39 < darkyen> Anybody around?
09:51 < bnoordhuis> morning
09:55 < darkyen> Good Morning
10:15 < abraxas> good morning
10:15 < abraxas> bnoordhuis: mind if I ask you a question about signals..? indutny suggested I ask you
10:15 < bnoordhuis> abraxas: sure
10:16 < abraxas> Is there a good reason why signal handlers don't reveal their source? (pid, etc)
10:16 < bnoordhuis> because there's no good way to emulate that on windows
10:16 < abraxas> I was afraid of that :)
10:17 < abraxas> thanks
10:18 < bnoordhuis> np
10:27 <@indutny> bnoordhuis: that's what I thought too :)
10:27 <@indutny> bnoordhuis: hi
10:27 <@indutny> howdy?
10:27 < bnoordhuis> indutny: hoya
10:28 < bnoordhuis> before you ask, i'll review your PR today :)
10:28 <@indutny> haha
10:28 <@indutny> ok
10:28 <@indutny> though I wasn't going to ask
10:53 < MI6> joyent/libuv: Ben Noordhuis master * a3b57dd : test, bench: remove unused includes (+2 more commits) - http://git.io/B29TWA
10:54 < travis-ci> [travis-ci] joyent/libuv#955 (master - a3b57dd : Ben Noordhuis): The build passed.
10:54 < travis-ci> [travis-ci] Change view : https://github.com/joyent/libuv/compare/e079a99abddb...a3b57dd5987c
10:54 < travis-ci> [travis-ci] Build details : http://travis-ci.org/joyent/libuv/builds/3661901
11:02 < MI6> joyent/libuv: Andrew Shaffer master * f5a2304 : sunos: properly disarm PORT_LOADED fsevent watcher Fixes a segmentation - http://git.io/IxwuHw
11:04 < travis-ci> [travis-ci] joyent/libuv#956 (master - f5a2304 : Andrew Shaffer): The build passed.
11:04 < travis-ci> [travis-ci] Change view : https://github.com/joyent/libuv/compare/a3b57dd5987c...f5a2304c9219
11:04 < travis-ci> [travis-ci] Build details : http://travis-ci.org/joyent/libuv/builds/3661977
11:10 < MI6> joyent/libuv: Ben Noordhuis master * c6c5b7a : Merge branch 'v0.8' - http://git.io/jfQoXQ
11:10 < MI6> joyent/libuv: Andrew Shaffer v0.8 * 4997738 : sunos: properly disarm PORT_LOADED fsevent watcher Fixes a segmentation - http://git.io/DC48rg
11:12 < travis-ci> [travis-ci] joyent/libuv#958 (v0.8 - 4997738 : Andrew Shaffer): The build was fixed.
11:12 < travis-ci> [travis-ci] Change view : https://github.com/joyent/libuv/compare/527a10f90428...49977386e93d
11:12 < travis-ci> [travis-ci] Build details : http://travis-ci.org/joyent/libuv/builds/3662068
11:12 < travis-ci> [travis-ci] joyent/libuv#957 (master - c6c5b7a : Ben Noordhuis): The build passed.
11:12 < travis-ci> [travis-ci] Change view : https://github.com/joyent/libuv/compare/f5a2304c9219...c6c5b7a901d2
11:12 < travis-ci> [travis-ci] Build details : http://travis-ci.org/joyent/libuv/builds/3662066
11:16 < MI6> joyent/libuv: bnoordhuis created tag node-v0.8.12 - http://git.io/78vb4g
11:16 < MI6> joyent/libuv: bnoordhuis created tag node-v0.8.15 - http://git.io/CBwu5A
11:16 < MI6> joyent/libuv: bnoordhuis created tag node-v0.9.3 - http://git.io/FSlrQw
11:20 <@indutny> finally
11:20 <@indutny> we've got tags
16:12 <@piscisaureus_> hello kids
16:13 <@indutny> piscisaureus_: you know what
16:13 <@indutny> better ignore this :)
16:13 <@piscisaureus_> indutny: ?
16:15 <@indutny> piscisaureus_: you're a troll
16:26 <@piscisaureus_> indutny: I cried softly
16:37 < Soarez> hello
16:39 < Soarez> is it ok to ThrowException() in the init() of a Node.js addon?
18:45 <@indutny> Soarez: that's not really a good thing to do, but it won't break anything
18:45 <@indutny> as far as I can see
18:55 < MI6> joyent/node: isaacs streams2 * 6285056 : test: Update simple/test-fs-{write,read}-stream-err for streams2 Streams (+70 more commits) - http://git.io/h_X0Zw
18:57 <@isaacs> i really wish that make tracked files a bit more smartly than just looking at atime
18:58 <@isaacs> sucks when i accidentally check out a v8 change, then go back, and now have to rebuild the world.
18:59 < TooTallNate> isaacs: http://xkcd.com/303/
19:16 < Soarez> indutny: thanks
19:19 < MI6> joyent/node: isaacs streams2 * 4791c32 : test: Update simple/test-fs-{write,read}-stream-err for streams2 Streams - http://git.io/SAgjKw
19:20 <@isaacs> ircretary: tell bnoordhuis Does this still strike you as a valid test? https://github.com/isaacs/node/commit/4791c3205d0b54753916d569202b54e90d86f8c9
19:20 < ircretary> isaacs: I'll be sure to tell bnoordhuis
19:35 <@isaacs> yoga, then merging streams2 into master.
20:00 <@indutny> :)
20:43 <@indutny> piscisaureus_awa: haha
20:43 <@indutny> piscisaureus_awa: yt?
20:43 <@indutny> pquerna: heya
20:43 <@piscisaureus_awa> indutny: ?
20:43 <@indutny> pquerna: yt?
20:43 < pquerna> indutny: hi
20:43 <@indutny> pquerna: oh goodness, you're here
20:43 <@indutny> glad to see you Paul
20:44 <@indutny> pquerna: it looks like I'm going to start adding "isolates" support to OpenSSL
20:44 <@indutny> pquerna: do you know if anyone already tried/did it?
20:44 <@indutny> by "isolates" I mean moving all static variables into an object
20:45 <@indutny> which would be either thread-local or required-to-pass as a first argument to all api functions
20:45 < pquerna> no
20:45 < pquerna> its mostly in the crypto code right?
20:46 < pquerna> not the actual ssl code iirc
20:47 < pquerna> i'd actually look at the ciphers you use and see if they need it
20:47 < pquerna> is it showing up in profiling?
20:51 <@indutny> pquerna: right
20:51 < pquerna> like the actual tls code is already all local to the session or context
20:51 <@indutny> well, there're a lot of locks
20:51 <@indutny> and disabling them helps a lot :)
20:51 <@indutny> pquerna: see https://github.com/indutny/tlsnappy/blob/master/src/tlsnappy.cc#L44
20:52 <@indutny> it doesn't seem to be an issue with 4 threads, but with 24 threads - wait time is visible
20:52 < pquerna> interesting approach
20:53 < pquerna> unsure you can do that for the session store?
20:53 < pquerna> unless you implement your own
20:53 < indutny_web> pquerna: sorry, irccloud is lagging again
20:53 < indutny_web> pquerna: so look at this https://github.com/indutny/tlsnappy/blob/master/src/tlsnappy.cc#L44
20:53 < indutny_web> I've already disabled some locks in my application
20:53 < pquerna> https://gist.github.com/7a5e14d44fcf0bd95055
20:54 < pquerna> yeah
20:54 < indutny_web> because lock contention was visible on a 24-thread machine
20:54 < pquerna> right
20:54 < indutny_web> ah, great
20:54 < indutny_web> well... I'm not using session storage right now
20:54 < pquerna> i just noticed its set to OFF :)
20:54 < indutny_web> yes
20:55 < pquerna> CRYPTO_LOCK_SSL_CTX is a risky one too
20:55 < indutny_web> so far it doesn't matter that much :)
20:55 < pquerna> things like random cert chain validation
20:55 < indutny_web> yeah, I know... but it seems to be working
20:55 < pquerna> i think they use that
20:55 < pquerna> well, if you turn on client certs?
20:55 < pquerna> i guess if you don't modify the main context after startup
20:55 < pquerna> it might just work
20:55 < indutny_web> I'm working with the context only from one thread
20:55 < indutny_web> i.e. each thread has its own context
20:55 < pquerna> ah
20:56 < pquerna> how bad is the memory on each context?
20:56 < indutny_web> pretty interesting question
20:56 < indutny_web> I haven't measured it yet
20:57 < pquerna> the other one that could be interesting is the x509 store
20:57 < indutny_web> right now only speed matters to me, and I want to beat nginx :)
20:57 < pquerna> either implementing the interface so your internal storage is stable after config / lockless
20:57 < pquerna> or just turning them off if you never modify
20:57 < indutny_web> hm... yeah
20:57 < indutny_web> but why not just move everything to separate Isolates
20:57 < pquerna> how far off is it right now from nginx?
20:58 < pquerna> most of the locking isn't even for globals is it?
20:58 < indutny_web> 4200 vs 5000
20:58 < pquerna> its just for certain objects that are shared
20:58 < pquerna> at least on the ssl side
20:58 < indutny_web> pquerna: well, there're some globals
20:58 < indutny_web> and also
20:58 < indutny_web> locks are not cheap
20:58 < pquerna> the context, its shared x509 store
20:58 < indutny_web> considering that they have memory barriers inside them
20:59 < indutny_web> hm... which reminds me about running it through cachegrind
20:59 < pquerna> another thing to look at
20:59 < pquerna> we call like the openssl get_error thing...
21:02 < pquerna> ERR_get_state locks too
21:02 < pquerna> sigh
21:02 < pquerna> this is kinda sucky :-/
21:02 < indutny_web> well, I'm not using it much
21:02 < indutny_web> actually, contention is visible, but not that high
21:02 < indutny_web> I'm just afraid of the memory barriers, which probably kill the cpu pipeline
21:03 < pquerna> nginx at 5k, is that multi-proc?
21:03 < indutny_web> yes
21:03 < pquerna> or just threaded
21:03 < indutny_web> nginx is multiproc
21:06 <@indutny> hm...
21:06 <@indutny> oh shit
21:06 <@indutny> I was using only 16 processes for nginx
21:06 <@indutny> it's probably even faster
21:07 <@indutny> facepalm
21:07 <@indutny> pquerna: 6747 rps
21:07 < pquerna> have you played with pinning the processes to the cpus?
21:07 < pquerna> guess it'll depend on what machine you are doing it on quite a bit too
21:10 <@indutny> interesting
21:10 <@indutny> I think I've hit a cap limit on that machine :)
21:10 <@indutny> I'm getting only 200 rps on tlsnappy now :)
21:10 <@indutny> hm...
21:11 <@indutny> 6700...
21:11 <@indutny> wtf
21:11 <@indutny> how is it that fast
21:11 <@indutny> I've profiled tlsnappy a lot
21:11 <@indutny> and most of the time it's doing BN_... stuff
21:12 <@indutny> ok, I think I need to dig into it a little bit more...
21:22 < yawnt> hai
21:28 < baz> Hi guys
21:28 <@indutny> hi
21:28 < baz> I have a really deep question to ask about libuv - and I have no idea how to proceed
21:28 < baz> Is this the correct place?
21:28 <@indutny> yes
21:28 <@indutny> the most correct one
21:28 < baz> okay, here goes...
21:29 < baz> okay, I am porting the scrypt key derivation library to node. In fact, it's done. npm install scrypt will get it. Also, the github page explains a lot about it: https://github.com/barrysteyn/node-scrypt
21:29 < baz> The thing is, scrypt takes as input a time variable (it is a double, its unit is in seconds). Its security relies upon the fact that you have to take this amount of time to compute your answer.
21:29 < baz> When I put a lot of them up on the event queue (it is asynchronous), the functions always seem to be cut off and not allowed to finish. Which ruins the derivation function
21:29 < baz> I am not experienced enough with Node internals, so I don't really know how to proceed
21:29 < baz> Any suggestions?
21:30 < baz> BTW: I am experienced in crypto, so I can explain scrypt if anyone wants to know
21:30 <@piscisaureus> baz: so what do you mean with "put a lot of them up on the event queue"?
21:30 <@piscisaureus> what libuv functions are you calling for that?
21:30 <@piscisaureus> baz: and "not allowed to finish"?
21:30 < baz> Sorry, I may not be explaining myself correctly
21:30 <@piscisaureus> baz: they never complete, or they crash, or ?
21:32 < baz> The scrypt function has both an encrypt and a decrypt part. Let's say we choose a time of 2.0 seconds to perform encryption, then decryption with the resulting cipher MUST be 2.0 seconds as well. Anything less and it will return an error
21:32 * indutny signs off
21:32 <@indutny> ttyl
21:32 < baz> So with that in mind, look at this...
21:33 < baz> Assume that maxtime is set to 2.0
21:33 < baz> scrypt.encrypt(message, password, max_time, function(err, cipher) {scrypt.decrypt(cipher, password, max_time, function(err, msg) { if (err) console.log(err);});});
21:33 < baz> The above will return a scrypt error after running it say 50 times
21:34 < baz> In my code, I am just doing the work by using uv_queue_work
21:34 <@piscisaureus> ah
21:34 < baz> So when I put lots of them on
21:35 < baz> It somehow does not allow some of the functions to finish??? At least this is what I suspect
21:35 <@piscisaureus> well, no
21:35 <@piscisaureus> libuv never interrupts work in the thread pool
21:36 <@piscisaureus> baz: so, the error is that it couldn't encrypt the data?
21:36 < baz> Can you suggest anything I can do (or do you know what the problem is)? I am excited about booting Ruby from my job, but I need to have this working before I do
21:36 < baz> Actually, it could not decrypt the data
21:36 <@piscisaureus> ah right
21:37 <@piscisaureus> so how does scrypt know how much time it spends decrypting the data?
21:37 < baz> You have to give it a value
21:37 <@piscisaureus> baz: yes - ok
21:37 <@piscisaureus> baz: so my assumption would be that it expects a certain number of computations
21:37 < baz> For testing, I have hard-coded these values (not in the release version) to make sure something like a rounding error was not involved
21:37 < baz> Yes
21:38 < baz> It is quite a beast
21:38 < baz> :)
21:38 <@piscisaureus> how many cores does your machine have?
21:39 < baz> two
21:39 < baz> It is my dev laptop
21:40 < baz> I am able to run this perfectly over a loop of 1000 in Python and Ruby
21:40 <@piscisaureus> baz: so what happens if you encrypt something for two seconds, while the cpu is otherwise unused
21:40 < baz> It works.
21:40 <@piscisaureus> baz: but then you decrypt it in the thread pool, so you might end up doing 4 decryptions in parallel
21:41 <@piscisaureus> so each decryptor gets only half of a cpu core
21:41 < baz> Why do you say 4 decryptions in parallel?
21:41 <@piscisaureus> baz: does scrypt have some trickery to deal with this?
21:41 <@piscisaureus> baz: well the uv_queue_work thread pool scales up to 4 threads typically
21:41 <@piscisaureus> (at least in node 0.8 on unix)
21:42 < baz> Not that I know of. I know the key derivation function well, but I do not know the internals inside out. But I don't think it has any trickery...
21:42 < baz> Okay, I did not know that
21:42 < baz> Thanks
21:42 <@piscisaureus> baz: so what you could try is just running one decryption at a time to figure out if that solves your problem
21:43 < baz> I have done that
21:43 < baz> Actually, what I did was more simple
21:43 < baz> I tried one encryption followed by a decryption
21:43 < baz> And it works - most of the time. But sometimes it does not.
21:43 < baz> That was when I decided to try it in a loop, and then it always failed some of the time
21:44 <@piscisaureus> hmm
21:44 < baz> I was thinking of launching separate threads instead of putting it in the event queue
21:44 < baz> Would that help?
21:44 <@piscisaureus> baz: you could do that
21:44 < baz> Then, some more questions
21:44 < baz> My threading knowledge is not the best
21:45 < baz> If I fire multiple threads, will the OS take care of when to run them? In other words, if there are not enough resources around, will the OS just delay things until there are, and then run the thread
21:45 < baz> So will the thread be guaranteed to run eventually?
21:46 <@piscisaureus> baz: but to be honest I have no clue what's going on. If scrypt actually requires exactly 2.0 seconds of uncontended CPU time, that sounds like a major design problem in the library to me. Would it not be an option to set maxTime much higher for the decryption process?
21:46 <@piscisaureus> baz: yes the os takes care of that.
21:46 <@piscisaureus> baz: but it doesn't do resource management for you
21:47 < baz> Most things in computers are good if they run fast. For key derivation, it's different: the longer it takes, the more secure it is (thus brute force attacks are difficult)
21:48 <@piscisaureus> baz: the OS will make a core run your thread and interrupt it when its time quantum is up (quantums are typically a couple milliseconds). Then it will run another thread on that core for a quantum, etc
21:48 < baz> If I encrypt m, and it becomes c, during the encryption phase, if I set the maxtime to 2.0 seconds, then I must have a maxtime of 2.0 seconds during the decryption
21:48 <@piscisaureus> baz: so it is not possible to encrypt with maxTime 2 and decrypt with maxTime 10?
21:48 < baz> Maxtime is a variable, it can be set to 0.5, 1.0, 3.0. As long as the same maxtime is used for decryption as was used for encryption, we are all good
21:49 < baz> Yes, that can be done
21:49 < baz> But it does not guarantee it will work, and it sounds like a bit of a hack to me
21:50 < stagas> shouldn't it encrypt with a minTime and decrypt with a maxTime or no upper limit on the decryption?
21:50 < baz> When you say it does not do resource management, what do you mean by that (resources as in memory???)
21:50 <@piscisaureus> baz: yes, or file descriptors, or whatever
21:51 <@piscisaureus> baz: it just runs stuff "in parallel"
21:51 < baz> I will quote from the author of scrypt: "maxtime - maximum amount of CPU time to spend computing the derived keys, in seconds.
This limit is only approximately enforced; the CPU performance is estimated and parameter limits are chosen accordingly. For the encryption functions, the parameters to the scrypt key derivation function are chosen to make the key as strong as possible subject to the specified limits; for the decryption..."
21:52 <@piscisaureus> baz: so what is the scrypt error you got?
21:52 < baz> What do I have to watch out for when I execute multiple threads if I have to manage resources? Also, I assume I can use uv_thread_create
21:53 < baz> The error I got was error 9: not enough time for decryption
21:53 <@piscisaureus> ah
21:53 < baz> Here is what the author says about decryption
21:53 < baz> "for the decryption functions, the parameters used are compared to the computed limits and an error is returned if decrypting the data would take too much memory or CPU time."
21:53 <@piscisaureus> baz: so i have another suspicion
21:53 < baz> Okay, please do tell
21:54 <@piscisaureus> baz: do you properly copy stuff before moving it off the thread pool?
21:54 <@piscisaureus> baz: er, *onto the thread pool
21:54 < baz> Do you mean do I perform a deep copy?
21:54 < baz> And then do I release my data afterwards from the heap?
21:54 < baz> Is that what you mean?
21:54 <@piscisaureus> baz: because this -> https://github.com/barrysteyn/node-scrypt/blob/master/scrypt_node.cc#L193-L213
21:55 <@piscisaureus> baz: is wrong
21:55 < baz> Thanks for spotting this. To tell you the truth, seven days ago I did not even know how to do this with NodeJS
21:56 < baz> So if you could tell me what I am doing wrong, I would greatly appreciate it
21:56 < baz> I just followed an example, to tell you the truth
21:57 < baz> Also, the error that I get is number 9
21:58 < stagas> baz: I think the error you're getting is normal
21:59 < stagas> baz: the security of scrypt seems to be that you are not allowed to run decryptions in parallel that consume more cpu / memory
21:59 < baz> Cool.
21:59 < baz> I am just glad that you guys know what the error is
22:00 < stagas> baz: so you are kind of forced to run that slow decryption serially
22:00 <@piscisaureus> baz: String::Utf8Value makes a temporary "copy" of the string on the C heap, but that copy disappears at the end of the function.
22:00 < baz> Actually @stagas, I was giving you the wrong error. It was returning number 10
22:00 <@piscisaureus> baz: however you are just copying a pointer to that string into the baton object - you should copy the string itself
22:01 < baz> If CPU/mem was the issue (as opposed to time) it would be a different error
22:01 <@piscisaureus> (because the pointer will not be valid)
22:01 < baz> Ahhhh
22:01 < baz> That makes sense
22:02 < baz> But the baton->message is a std::string - I thought the = operator was overloaded to perform a deep copy? Am I wrong?
22:04 <@piscisaureus> you might be right - euh
22:04 <@piscisaureus> I never use std
22:04 < baz> I was hoping I was wrong - it would be the easiest fix :(
22:06 < baz> @stagas - if you are correct (you must run decryption serially), how do I accomplish that within the node framework?
22:07 < tjfontaine> set up a queue on the js side that handles dispatching
22:08 < baz> @tjfontaine - thanks. Can you point me to some documentation on how to do this?
22:11 < tjfontaine> baz: not really, a crude implementation would be to just change .encrypt and .decrypt to [].push() and then proxy the cb, which would then check if there are more actions to be performed
22:12 < baz> @tjfontaine - hmmmm, I really need to read up on this stuff. is [].push a node internal command?
22:13 < tjfontaine> baz: no, [] is a javascript array
22:13 < baz> Yeah, so push it on an array, and then pop it off the array when the time comes?
22:13 < baz> Okay, that makes sense.
22:13 < baz> But yikes, Ruby and Python will beat the living daylights out of this implementation then....
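tjfontaine's suggestion — push calls onto an array and proxy the callback — can be sketched as a small serializer on the JS side. `makeSerialQueue` is a hypothetical helper written for illustration, not a node API:

```javascript
// Run async jobs one at a time: queue on the JS side, proxy the callback,
// and dispatch the next job only when the previous one finishes.
function makeSerialQueue(worker) {
  const pending = [];
  let busy = false;

  function runNext() {
    if (busy || pending.length === 0) return;
    busy = true;
    const job = pending.shift();
    worker(...job.args, (...result) => {
      busy = false;
      job.cb(...result); // proxy the original callback
      runNext();         // then start the next queued job, if any
    });
  }

  return (...args) => {
    const cb = args.pop();
    pending.push({ args, cb });
    runNext();
  };
}

// Usage sketch with the node-scrypt binding discussed above (hypothetical):
//   const decrypt = makeSerialQueue(scrypt.decrypt);
//   decrypt(cipher, password, maxTime, (err, msg) => { ... });
```

This keeps at most one scrypt job on the libuv thread pool at a time, so each decryption gets the uncontended CPU time it expects.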
22:14 < tjfontaine> not if it all has to be done serially
22:14 < baz> My dream of ousting Ruby and using Node will never fly in the office
22:14 < baz> But surely Python just uses multiple threads?
22:16 < baz> I think what I may be forced to do is to ask Colin Percival (the author of scrypt) for some help. He may ask me some questions regarding libuv that I may not know. Can I fire them back to this channel?
22:18 < stagas> baz: the queue also won't scale beyond one node process, so a cluster all trying to decrypt will fail again. I think you shouldn't use a queue and fail; then a higher level abstraction should take care of that
22:19 < stagas> baz: also, are there any docs online for scrypt? can't seem to find anything beyond the pdf
22:19 < baz> stagas: my understanding is that using a queue will work, but it will make it as slow as hell
22:20 < baz> You can go to the tarsnap page, and you can see the author's comments there. Besides the paper, one also has the code. After that, we are on our own...
22:21 < stagas> baz: my understanding is that each decryption should be as slow as maxtime or it will fail
22:21 < baz> At least as slow as maxtime
22:21 < baz> Yes
22:21 < baz> But I suspect (and I am no expert) that Python or Ruby will fire several threads that will deal with this
22:22 < baz> Because when I do a loop of 1000 in the python implementation of scrypt, it works well
22:23 < stagas> baz: you do these in parallel?
22:25 < baz> stagas: No, I just run a loop. I am using time to test my assumption though that nothing happens behind the scenes...
22:25 < stagas> baz: and how long does that take
22:25 < baz> I am testing now...
22:26 < baz> Will put the results up soon
22:27 < txdv> piscisaureus: what do you think about uv_udp_dualstack(handle, enable)?
22:30 < stagas> baz: anyway, I believe the expected behavior of scrypt is not to allow you to run multiple decryptions in parallel, and the error is normal since you try to do 50 of them.
22:30 < stagas> that's like brute force, the thing it's trying to solve by using sequential key derivation functions
22:31 < stagas> baz: but better ask the author about it
22:31 < baz> stagas: After running it in python (doing 100 encrypts and 100 decrypts - each with a maxtime of 1.0 sec), it takes 2m30.875s
22:32 < baz> I am going to ask the author about it, and will report back here
22:32 < baz> If you are interested, that is :)
22:32 < stagas> baz: and it's also probably the reason it's not highly adopted, since if you have hundreds of users trying to log in at the same time it would keep them waiting for ages
22:32 < stagas> baz: though it's very secure
22:33 < baz> stagas: Actually, I believe it will be the future. Tarsnap has an incredible load, and it is able to handle things perfectly. I will get the answers from Dr Percival, and I will provide this community with this wonderful resource (if it is possible to do so).
22:38 < stagas> baz: try doing the same test in your lib in sequence; I suspect the same result in time
22:39 < baz> stagas: okay :)
22:39 < baz> I will report back to you guys...
--- Log closed Sat Dec 15 00:00:19 2012