--- Log opened Mon Jan 06 00:00:42 2014
00:00 <@MI6> libuv-master-gyp: #370 UNSTABLE windows-x64 (5/202) smartos-ia32 (3/203) smartos-x64 (3/203) windows-ia32 (5/202) http://jenkins.nodejs.org/job/libuv-master-gyp/370/
03:28 -!- mode/#libuv [+o TooTallNate] by ChanServ
06:06 -!- mode/#libuv [+o TooTallNate] by ChanServ
06:42 <@MI6> nodejs-v0.10-windows: #420 UNSTABLE windows-x64 (11/608) windows-ia32 (12/608) http://jenkins.nodejs.org/job/nodejs-v0.10-windows/420/
09:15 < rendar> https://github.com/joyent/libuv/blob/master/src/unix/process.c -- i cannot understand line 392, why should a new child process signal the parent process with an EPIPE when it calls exec*() to avoid that race condition, if the code between fork() and exec*() is completely controlled by libuv so the user can't throw a signal there?
09:34 <@indutny> well
09:34 <@indutny> the parent process continues to execute after fork()
09:35 <@indutny> and the child won't receive any signals
09:35 <@indutny> if it won't block
09:38 < rendar> indutny, hmmm, so that is needed only for letting the parent wait on the child execution? so letting the parent know when the "limbo" state between the child's fork() and exec*() is finished?
09:38 <@indutny> yep
09:43 < rendar> indutny, i got that, but why a pipe, and maybe not a piece of shared memory?
09:45 < rendar> indutny, e.g. a shared integer 0 and the child process sets that integer to 1 before calling exec*() ? because in the time between setting that integer to 1 and calling exec*() there could be a race condition?
09:45 <@indutny> shared memory
09:45 <@indutny> meh
09:45 <@indutny> well
09:46 <@indutny> it won't be that atomic
09:47 < rendar> right
09:48 < rendar> indutny, well, that race condition is because of signals right? i mean, if one disables *all* signals from the parent process, and the child process inherits that, we don't need this pipe trick, am i right?
09:58 < felicity> also a pipe is much simpler than shared memory and i doubt this is a major performance bottleneck
10:01 <@indutny> it is not
10:01 <@indutny> rendar: it's just simpler than doing everything that you mentioned
10:01 <@indutny> and also atomic
10:04 < rendar> indutny, i see
10:04 < rendar> indutny, yeah right
10:05 < rendar> indutny, i just meant that in a hypothetical case where signals are all blocked, we wouldn't need that, because that is needed only for signals, right?
10:06 <@indutny> I think yes
10:49 <@MI6> nodejs-v0.10: #1696 UNSTABLE linux-x64 (5/608) osx-x64 (1/608) linux-ia32 (3/608) smartos-x64 (5/608) smartos-ia32 (5/608) osx-ia32 (1/608) http://jenkins.nodejs.org/job/nodejs-v0.10/1696/
12:23 < roxlu> hey guys, I was wondering, when I use a uv_mutex_t with a uv_cond_t, can I still use that mutex to synchronize some other data besides the data I initialized the mutex/cond for?
12:24 * roxlu hopes he makes any sense ^.^
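For reference, a minimal sketch (not libuv's actual code) of the fork()/exec*() pipe handshake rendar and indutny discuss above: the write end is marked CLOEXEC, so a successful exec closes it atomically and the parent's read() returns EOF, while a failed exec reports errno through the pipe. Names are illustrative.

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Spawn argv[0]; return 0 if exec succeeded, else the child's errno. */
    int spawn_and_report(char* const argv[]) {
      int fds[2];
      if (pipe(fds) != 0)
        return errno;
      fcntl(fds[1], F_SETFD, FD_CLOEXEC);   /* write end closes atomically on exec */

      pid_t pid = fork();
      if (pid < 0)
        return errno;
      if (pid == 0) {                       /* child: the "limbo" between fork and exec */
        close(fds[0]);
        execvp(argv[0], argv);
        int err = errno;                    /* only reached if exec*() failed */
        write(fds[1], &err, sizeof(err));
        _exit(127);
      }

      close(fds[1]);                        /* parent */
      int err = 0;
      ssize_t n = read(fds[0], &err, sizeof(err));  /* waits out the limbo */
      close(fds[0]);
      return n == 0 ? 0 : err;              /* EOF (n == 0) means exec succeeded */
    }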
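On roxlu's uv_cond_t question above: the mutex paired with a condition variable is an ordinary lock, so it can guard unrelated state too, as long as every access holds it. A hedged sketch of the usual pattern (names invented; call uv_mutex_init()/uv_cond_init() once at startup):

    #include <uv.h>

    static uv_mutex_t lock;
    static uv_cond_t cond;
    static int queue_len = 0;    /* the state the condition is about */
    static int other_stat = 0;   /* unrelated state; the same mutex is fine */

    void producer(void) {
      uv_mutex_lock(&lock);
      queue_len++;
      other_stat++;                    /* reusing the lock for other data */
      uv_cond_signal(&cond);
      uv_mutex_unlock(&lock);
    }

    void consumer(void) {
      uv_mutex_lock(&lock);
      while (queue_len == 0)           /* loop: wakeups can be spurious */
        uv_cond_wait(&cond, &lock);    /* atomically unlocks, sleeps, relocks */
      queue_len--;
      uv_mutex_unlock(&lock);
    }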
13:45 -!- mode/#libuv [+o piscisaureus] by ChanServ
14:04 < mmalecki> indutny: hey Fedor
14:05 < mmalecki> indutny: does your dht module support initial host discovery through any means?
14:05 <@indutny> hey man
14:05 <@indutny> nope
14:05 <@indutny> you should bootstrap it yourself
14:05 < mmalecki> what'd you recommend for bootstrapping it?
14:06 < mmalecki> I was thinking of using UDP multicast
14:06 <@indutny> hm...
14:06 <@indutny> I'd use some big list of nodes
14:06 <@indutny> with sparse ids
14:06 <@indutny> perhaps 40-80
14:06 <@indutny> or
14:07 <@indutny> centralized server
14:07 <@indutny> but perhaps UDP will work too
14:07 <@indutny> I mean multicat
14:07 <@indutny> multicast*
15:22 <@MI6> nodejs-master: #828 UNSTABLE smartos-x64 (6/692) smartos-ia32 (5/692) centos-ia32 (3/692) ubuntu-x64 (2/692) centos-x64 (1/692) http://jenkins.nodejs.org/job/nodejs-master/828/
15:28 <@tjfontaine> indutny: hey
15:28 <@indutny> hey man
15:28 <@indutny> how are you?
15:28 <@tjfontaine> oh so it's the difference between execFile and exec
15:29 <@indutny> yep
15:29 <@tjfontaine> we should be delivering the cmd in that case as well though
15:29 <@indutny> ok
15:29 <@indutny> going to update openssl in node
15:29 <@indutny> to 1.0.1f
15:29 <@indutny> they have just released it
15:30 <@tjfontaine> sounds good
15:30 <@indutny> yeah
15:30 <@indutny> fixes some crashes
15:30 <@indutny> http://www.openssl.org/news/openssl-1.0.1-notes.html
15:31 <@indutny> already did it in bud
15:31 <@indutny> seems to be working fine :)
15:31 <@tjfontaine> heh, I trust openssl -- presuming you verified the source archive after their defacing incident :P
15:32 <@indutny> hahaha
15:32 <@indutny> yeah, I did
15:38 <@indutny> tjfontaine: https://github.com/joyent/node/pull/6812
15:38 <@indutny> tjfontaine: may I ask you to check if it builds on windows?
15:38 <@indutny> or will CI check it automatically?
15:39 <@tjfontaine> indutny: push it to a feature branch on joyent/node and then the CI will run it on windows
15:39 <@indutny> ok
15:39 <@tjfontaine> PRs don't by default because it's asking to run arbitrary code on a windows box ;)
15:39 <@indutny> :)
15:39 <@indutny> ok
15:39 <@MI6> joyent/node: indutny created branch feature/update-openssl1.0.1f - http://git.io/xhegZA
15:39 <@indutny> also
15:39 <@indutny> are we interested in applying this
15:39 <@indutny> https://github.com/indutny/bud/commit/78866c73311cfb2b546ac4e924a807b3fe123850
15:39 <@indutny> that's google's patch
15:39 <@indutny> for TLS False Start
15:40 <@tjfontaine> hmm I'm not opposed to it, but I need to do a proper review when I get into work
15:40 <@indutny> oh gosh
15:40 <@indutny> I just noticed a problem with bud
15:40 <@indutny> one sec
15:41 <@indutny> with a false start
15:42 <@indutny> fixed
15:54 <@tjfontaine> mother fucking windows
15:55 < swaj> hey indutny !
15:56 < swaj> sorry I was away for Christmas break.
15:56 < swaj> going to test setEngine today :)
15:56 <@indutny> hey man
15:56 <@indutny> np
15:56 <@indutny> swaj: I think it should work with just an id now
15:56 < swaj> ok
15:56 <@indutny> swaj: setEngine('atalla')
15:56 < swaj> let me go clone and build master
15:56 <@indutny> sure
15:56 <@indutny> thank you
15:56 < swaj> and I'll run some tests
15:56 <@indutny> though, I'm going to get groceries
15:56 < swaj> it's all good
15:56 < swaj> early here, so I'll be on for a while
16:04 < swaj> hmm
16:04 < swaj> having issues
16:04 < swaj> indutny: let me know when you're back from getting groceries and I can show you the logs
16:09 <@MI6> node-review: #139 FAILURE windows-x64 (17/692) centos-x64 (1/692) linux-ia32 (1/692) windows-ia32 (17/692) centos-ia32 (2/692) smartos-ia32 (8/692) smartos-x64 (8/692) osx-x64 (2/692) http://jenkins.nodejs.org/job/node-review/139/
16:38 <@indutny> swaj: back
16:38 <@indutny> what's up?
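For context on the setEngine('atalla') exchange above: a call like that maps, at the OpenSSL level, to the ENGINE API, roughly as in the sketch below (an illustration of the underlying calls, not node's actual implementation).

    #include <openssl/engine.h>

    /* Look up an engine by id (e.g. "atalla"), initialize it, and make
     * it the default provider for every method it implements. */
    int use_engine(const char* id) {
      ENGINE_load_builtin_engines();
      ENGINE* e = ENGINE_by_id(id);
      if (e == NULL)
        return -1;
      if (!ENGINE_init(e)) {              /* acquire a functional reference */
        ENGINE_free(e);
        return -1;
      }
      if (!ENGINE_set_default(e, ENGINE_METHOD_ALL)) {
        ENGINE_finish(e);
        ENGINE_free(e);
        return -1;
      }
      ENGINE_free(e);   /* drop the structural ref; the engine stays registered */
      return 0;
    }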
16:58 <@tjfontaine> indutny: I'm going to trigger that -review build again, seems like it was probably a transient issue for that failure
16:58 <@indutny> щл
16:58 <@indutny> ok
16:59 <@tjfontaine> otherwise it's looking fine
17:22 <@MI6> node-review: #140 FAILURE windows-x64 (16/692) centos-x64 (2/692) linux-ia32 (2/692) windows-ia32 (17/692) centos-ia32 (2/692) smartos-ia32 (5/692) smartos-x64 (7/692) http://jenkins.nodejs.org/job/node-review/140/
17:23 <@isaacs> tjfontaine: https://gitlab.com/
17:23 <@isaacs> tjfontaine: a free-as-in-beer cloud-hosted open source alternative to github
17:23 <@tjfontaine> this is the github knock-off right?
17:23 <@isaacs> yes.
17:24 <@isaacs> i'm going to give it a spin
17:24 <@tjfontaine> ya, I guess it's time to investigate it further
17:24 <@tjfontaine> please and thank you :)
17:27 -!- mode/#libuv [+o TooTallNate] by ChanServ
17:54 <@MI6> libuv-master: #418 UNSTABLE windows (4/202) smartos (3/203) http://jenkins.nodejs.org/job/libuv-master/418/
18:04 < trevnorris> morning
18:13 < mmalecki> morning trevnorris
18:13 < trevnorris> morning
18:14 <@indutny> mmalecki: any luck?
18:14 < trevnorris> joy. today all the interns start.
18:14 <@indutny> with dht.js
18:14 <@indutny> trevnorris: at mzla?
18:14 < mmalecki> indutny: yes, I went with hard-coding my central server IP for now
18:14 <@indutny> nice
18:14 <@indutny> does it work? :D
18:14 <@indutny> haha
18:15 <@indutny> I remember testing it like a year ago
18:15 <@indutny> but it should be working fine, I guess
18:15 < mmalecki> indutny: hopefully, I'm still stuck with some deployment stuff
18:15 < trevnorris> indutny: yup.
18:15 < mmalecki> designing new architectures is fun
18:15 < mmalecki> until it's not fun
18:15 < mmalecki> then you start drinking and it's fun again
18:16 <@MI6> libuv-node-integration: #373 UNSTABLE linux-x64 (3/692) smartos-ia32 (5/692) smartos-x64 (6/692) http://jenkins.nodejs.org/job/libuv-node-integration/373/
18:19 < trevnorris> tjfontaine: ping about https://github.com/joyent/node/pull/6802
18:20 < trevnorris> indutny: you know why the close() APIs for dgram and net are different? i mean, you can pass a callback to .close() in net, but you have to set on('close') on dgram. what's up w/ that?
18:21 < trevnorris> indutny / piscisaureus: I'd like your feedback on https://github.com/joyent/node/pull/6802#issuecomment-31593382 and the subsequent two comments
18:24 < trevnorris> groundwater: have that link again for those tests? I want to cherry-pick those into my branch
18:26 <@piscisaureus> trevnorris: what is eloop?
18:26 < trevnorris> event loop, i.e. uv_run
18:27 < groundwater> trevnorris https://github.com/jacobgroundwater/node/tree/ee-hooks
18:27 <@piscisaureus> ah
18:29 < trevnorris> groundwater: awesome. thanks. going to throw those onto the branch. thanks again for taking the time to create those tests.
18:30 <@piscisaureus> trevnorris: I'm okay with that change. I think we did the 'early' close callback this way so the libuv close callback wouldn't need another c++ -> js roundtrip.
18:30 <@tjfontaine> it's just difficult to know who we might break in their assumptions of end/close semantics, it's a fragile piece of node history that is dangerous to change, especially without a need today
18:30 <@tjfontaine> we've indicated our intention to close, I'm not sure there's a need to wait for libuv to tell us it has
18:30 <@piscisaureus> trevnorris: and the handle is "dead" after uv_close anyway
18:30 < trevnorris> tjfontaine: regardless, it's broken now. emitting in a nextTick means the eloop is essentially blocked.
18:31 <@piscisaureus> trevnorris: but if we made synchronous callbacks, that's bad! Was this actually the case?
18:31 < trevnorris> piscisaureus: that's fine. the ._handle is set to null as soon as close() is called.
18:31 < trevnorris> piscisaureus: well, we make a nextTick callback. so the eloop was blocked.
18:31 <@piscisaureus> trevnorris: but it'll leak into the close callback, no?
18:31 < trevnorris> it just appeared to be async
18:31 <@piscisaureus> ah ok that's fine
18:31 <@piscisaureus> as what happens in lib/...
18:32 < trevnorris> tjfontaine: do you mean that end could fire before close w/ this change?
18:33 <@piscisaureus> trevnorris: euh ? I guess not, but not sure what you mean.
18:33 < trevnorris> tjfontaine: and beyond intent, I don't think the call should be in a nextTick. so imo it's a setImmediate() or this patch.
18:33 <@tjfontaine> I just haven't spent enough cycles on it, I'm just worried about changes in these semantics without knowing of an issue, we can totally change it to a setImmediate, that's fine with me
18:34 <@piscisaureus> trevnorris: it seems to make the code actually simpler, so if it doesn't slow down stuff then I'm okay with it.
18:34 < trevnorris> tjfontaine: what i'm saying is that it's essentially the same thing, except this way we're using the libuv api properly.
18:34 <@piscisaureus> trevnorris: but maybe keep tjfontaine happy and postpone until after 0.12?
18:34 <@tjfontaine> nah I think this can go into .12, just need to think about it some more
18:34 <@piscisaureus> maybe fork off 0.13 already?
18:35 <@tjfontaine> brb coffee
18:35 < trevnorris> tjfontaine: off the top of your head, what would you like me to test?
18:36 < trevnorris> piscisaureus: w/ the patch the callbacks aren't run until after uv__finish_close() has completed. it seemed like the correct place in the libuv api where the callbacks should be made.
18:37 <@piscisaureus> trevnorris: well, I'm not necessarily super happy with excessive loop-phase strictness
18:37 <@piscisaureus> trevnorris: I think the user wouldn't notice anyway.
18:37 < trevnorris> piscisaureus: could you help me understand the win uv_run? it's not near the same.
18:38 <@piscisaureus> trevnorris: messy, also, it didn't get the cleanup love that uv-unix got in the last 2 years
18:39 <@piscisaureus> trevnorris: what's the specific question?
18:39 <@piscisaureus> trevnorris: close callbacks (and some other types of callbacks too!) are invoked in uv_process_endgames
18:39 < trevnorris> piscisaureus: that's what I wanted to know. thanks :)
18:39 <@piscisaureus> trevnorris: but remember this:
18:40 <@piscisaureus> trevnorris: * between uv_close and the close callback there can be multiple loop iterations on windows
18:40 <@piscisaureus> trevnorris: uv_process_endgames may also call stuff like the read_cb sometimes
18:41 < trevnorris> piscisaureus: is that an implementation detail, or just how win works?
18:43 <@piscisaureus> trevnorris: the first is how win works, the 2nd is an implementation detail
18:43 < trevnorris> piscisaureus: also, I thought "close" meant that the stream won't accept anything else coming in. not that it's actually closed.
18:43 < trevnorris> ok, cool.
18:44 < trevnorris> oy, it's going to take me a while to get a feel for the logic flow on the win and unix sides.
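The contract piscisaureus describes above is worth pinning down in code: after uv_close() the handle is unusable, but its memory must stay valid until the close callback runs, which on Windows may be several loop iterations later. A minimal sketch:

    #include <stdlib.h>
    #include <uv.h>

    static void on_close(uv_handle_t* handle) {
      free(handle);       /* only now is the handle truly dead */
    }

    void start_shutdown(uv_tcp_t* tcp) {   /* assume tcp was malloc'ed */
      /* Do NOT free the handle here: the loop still owns it until
       * on_close fires, possibly several iterations later on Windows. */
      uv_close((uv_handle_t*) tcp, on_close);
    }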
18:44 < trevnorris> piscisaureus: oh, also. have a guy at apple checking out the pwrite() issue.
18:45 <@piscisaureus> trevnorris: yay! was just about to ask
18:45 <@piscisaureus> trevnorris: you have established contact?
18:46 < trevnorris> piscisaureus: my friend who programs kernel drivers is going to write a simplified test case for me, to make sure it's actually a bug.
18:46 <@piscisaureus> kewl!
18:46 < roxlu> hi guys, i'm using a uv_work_t and spawning some threads. I was wondering, how can I stop all threads that are running when I want to close my app ?
18:47 < trevnorris> piscisaureus: also, another question about that. so in uv__fs_write it checks if req->off < 0. but the uv_fs_t is only used w/ sendfile, which requires the memory to be mmap-able. which means off can't ever be less than 0.
18:47 < trevnorris> piscisaureus: I just must be missing something.
18:47 <@piscisaureus> trevnorris: as a mozillan - do you have to open a bug for every 2-liner that you want to submit? Or are there more informal ways to get minor stuff in?
18:48 < trevnorris> piscisaureus: like, for me personally or just in general?
18:48 <@piscisaureus> trevnorris: in general?
18:49 <@piscisaureus> trevnorris: this is for js so I won't ask you to land anything there?
18:49 <@piscisaureus> s/\?//
18:49 <@piscisaureus> trevnorris: uv_fs_t is also used for write, read, stat etc
18:50 <@piscisaureus> roxlu: thread pool threads? other threads?
18:50 < trevnorris> piscisaureus: guess my thought was that a uv_fs_t would never write to a socket, so ->off would always be >= 0.
18:50 < roxlu> piscisaureus: I'm creating multiple uv_work_t using uv_queue_work
18:51 <@piscisaureus> roxlu: you can't reliably cancel thread pool work. You'll have to wait until it completes, after that call uv_loop_delete() to join all worker threads.
18:51 <@piscisaureus> roxlu: if you want to exit quick-n-dirty just call exit()
18:52 < trevnorris> piscisaureus: I think you can submit a single bug as long as each change is in a distinct commit.
18:52 < roxlu> piscisaureus: ok thanks
18:52 < roxlu> strange thing is, that when a uv_work_t is active my d'tor isn't even called
18:53 < roxlu> I was thinking to add a shared variable which I set to false when the workers need to stop
18:54 <@piscisaureus> roxlu: hmm. I think uv_loop_delete doesn't actually join threads...
18:54 < roxlu> piscisaureus: weird thing is that my call to uv_loop_delete() isn't even called
18:54 < roxlu> (I did create a new loop btw)
18:55 <@piscisaureus> https://github.com/joyent/libuv/blob/master/src/unix/threadpool.c#L132-L153
18:56 <@piscisaureus> I don't know how Ben intended this to be used ...
18:56 <@piscisaureus> There's a global thread pool so likely uv_loop_delete won't delete any threads.
19:00 < roxlu> hmmm interesting, it looks like it has to do with how GLFW and libuv work. I tell GLFW to close my windows when I press esc, this somehow makes the loop blocking (or it joins some threads)
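roxlu's shared-variable idea is the standard workaround here: uv_queue_work() requests can't be cancelled reliably, but the work function can poll a stop flag and return early. A sketch (names illustrative; a production version would use a real atomic rather than volatile):

    #include <stdlib.h>
    #include <uv.h>

    static volatile int stop_requested = 0;   /* set from the main thread */

    static void work_cb(uv_work_t* req) {     /* runs on a thread-pool thread */
      while (!stop_requested) {
        /* ... do one bounded chunk of work per iteration ... */
      }
    }

    static void after_work_cb(uv_work_t* req, int status) {
      free(req);                              /* runs back on the loop thread */
    }

    void start_work(uv_loop_t* loop) {
      uv_work_t* req = malloc(sizeof(*req));
      uv_queue_work(loop, req, work_cb, after_work_cb);
    }

    /* To shut down: set stop_requested = 1, then keep running the loop
     * until after_work_cb has fired for every outstanding request. */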
19:02 < trevnorris> piscisaureus: you know if there's a win equivalent of man (2) splice ?
19:04 <@piscisaureus> trevnorris: there isn't. Is there a mac equivalent?
19:04 < trevnorris> piscisaureus: man page says it's linux specific.
19:04 < trevnorris> ... but that has nothing to do w/ mac
19:04 < trevnorris> let me check
19:05 < rendar> trevnorris, there is TransferFile or FileTransfer, something like that
19:05 < rendar> trevnorris, but that is more like sendfile()
19:06 < rendar> trevnorris, i think beside that one, there isn't one
19:06 < trevnorris> rendar: cool. thanks.
19:06 < rendar> yw
19:06 <@piscisaureus> TransmitFile / TransmitPackets
19:06 < rendar> yeah that one
19:07 < rendar> piscisaureus, but, is TransmitFile worth using? i mean, can it really help performance? i think it cannot even support iocp, iirc
19:07 < trevnorris> and that works between any two fd's?
19:07 <@piscisaureus> rendar: I haven't benchmarked it so I wouldn't know. The api looks like it supports IOCP though.
19:09 < roxlu> hmm something else which is interesting, it seems that the threads aren't spawned directly after calling uv_queue_work
19:10 <@indutny> trevnorris: sorry, was away
19:11 <@piscisaureus> roxlu: they are started on the first invocation of uv_loop_init or uv_default_loop
19:11 <@indutny> trevnorris: is your question still relevant?
19:13 < roxlu> hmm can't find uv_loop_init()
19:13 < trevnorris> indutny: about splice? sure. mainly i'm just curious
19:13 <@indutny> splice?
19:13 < trevnorris> man (2) splice
19:14 < trevnorris> copy data between any two fd's and keep it in kernel space.
19:15 <@indutny> ah
19:16 <@indutny> I don't think it works on mac
19:16 <@indutny> let me check
19:16 <@indutny> oh
19:16 <@indutny> sendfile()
19:16 <@indutny> hm
19:17 <@indutny> well, that's not exactly it
19:18 < trevnorris> indutny: yeah. the reason I like the idea of splice is that it can be done between sockets and any other fd.
19:18 <@indutny> I know
19:18 < trevnorris> but with sendfile the in_fd must support mmap.
19:18 <@indutny> I don't think that it is really feasible
19:18 <@indutny> to support it on non-linuxes
19:18 <@indutny> only if emulating it
19:19 < trevnorris> yeah. that's what I figured. oh well.
19:20 <@indutny> I think we had some plans for it
19:20 <@indutny> in ub
19:20 <@indutny> uv*
19:20 <@indutny> but it never worked out
19:20 < trevnorris> bummer
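For reference, the splice(2) pattern trevnorris describes above: one side of each splice() call must be a pipe, so copying between two arbitrary fds goes through an intermediate pipe while the data stays in kernel space. A Linux-only sketch:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    /* Copy up to len bytes from in_fd to out_fd without the payload
     * ever entering user space. */
    ssize_t kernel_copy(int in_fd, int out_fd, size_t len) {
      int p[2];
      if (pipe(p) != 0)
        return -1;
      ssize_t n = splice(in_fd, NULL, p[1], NULL, len, SPLICE_F_MOVE);
      if (n > 0)                               /* pipe -> destination fd */
        n = splice(p[0], NULL, out_fd, NULL, (size_t) n, SPLICE_F_MOVE);
      close(p[0]);
      close(p[1]);
      return n;
    }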
20:56 <@tjfontaine> trevnorris, indutny: I've had conversations in recent history regarding this concept of what we could call require('stream').sendfile -- specifically relating to linux's concept of splice, what that would look like on smartos, and whether we could get similar concepts for the bsds and windows
20:58 < trevnorris> tjfontaine: why the ... does Socket#destroy() emit synchronously? man this API is inconsistent.
20:58 <@tjfontaine> trevnorris, indutny: basically it could be implemented in user land today, with an interface like uv_pump(uv_handle_t, uv_handle_t, size_t), such that you poll in/out on the first two and when both handles are ready you start read/write'ing until you get EAGAIN, or max buffer bytes per this iteration
20:58 < trevnorris> tjfontaine: and yeah. I like that idea. :)
20:59 <@tjfontaine> trevnorris: if you're interested in further conversation I can get you in contact with the people I've talked about it with
20:59 < trevnorris> tjfontaine: i do like the idea, but that's definitely a v1.0 thing. :)
21:00 < trevnorris> tjfontaine: ok. and about the emit after _handle.close(). Socket#destroy already does this. the emit points are all over the place.
21:01 <@tjfontaine> my point is only that because we've been inconsistent for years, a path forward to consistency is difficult to achieve without also potentially becoming backwards incompatible
21:03 < trevnorris> ...
21:03 <@tjfontaine> frustrating right?
21:04 < trevnorris> it's all over the place. Socket#destroy() has the option of emitting the close event on the server, after it decrements the number of connections, but it doesn't check if there are still open connections on the server.
21:05 < trevnorris> and why the hell should Socket#destroy() be allowed to fire the close event for the server anyways?
21:05 < trevnorris> like, what. the. fuck.
21:06 <@tjfontaine> destroy is a very final mechanism though, it's like a "no really, we're going down hard" moment
21:06 < trevnorris> but why should destroy on a socket bring down the server?
21:07 <@tjfontaine> hmm?
21:07 <@tjfontaine> destroy on the server's socket you mean?
21:07 < trevnorris> tjfontaine: this: https://github.com/joyent/node/blob/master/lib/net.js#L468-L475
21:08 < trevnorris> tjfontaine: i got this after I made a change to not emit after the actual _handle.close() event was complete, and I started to receive two server close emits if I destroyed the socket
21:09 < trevnorris> *to not emit _until_ after
21:10 < trevnorris> ah crap. dumb ass self._connections check
21:10 < trevnorris> didn't see that. but seriously, wtf. and why is it happening synchronously?
21:11 < trevnorris> oh wait. it's technically not, because it's wrapped in a nextTick...
21:11 < trevnorris> sorry. bad mood today. AL has been giving me shit so i'm taking it out on net. :P
21:12 <@tjfontaine> that's nice of you :P
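A rough user-land sketch of the uv_pump() idea tjfontaine floats above: poll the input fd with uv_poll and shovel bytes to the output until EAGAIN. Heavily simplified (read side only, no write-side polling or backpressure); all names are invented.

    #include <errno.h>
    #include <unistd.h>
    #include <uv.h>

    typedef struct { uv_poll_t poll; int in_fd; int out_fd; } pump_t;

    static void on_readable(uv_poll_t* h, int status, int events) {
      pump_t* p = (pump_t*) h->data;
      char buf[65536];
      ssize_t n;
      if (status < 0) { uv_poll_stop(h); return; }
      while ((n = read(p->in_fd, buf, sizeof(buf))) > 0)
        write(p->out_fd, buf, (size_t) n);   /* assumes out_fd keeps up */
      if (n == 0 || (n < 0 && errno != EAGAIN))
        uv_poll_stop(h);                     /* EOF or a real error */
    }

    /* Both fds must be non-blocking. */
    int pump_start(uv_loop_t* loop, pump_t* p, int in_fd, int out_fd) {
      p->in_fd = in_fd;
      p->out_fd = out_fd;
      p->poll.data = p;
      uv_poll_init(loop, &p->poll, in_fd);
      return uv_poll_start(&p->poll, UV_READABLE, on_readable);
    }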
21:20 <@tjfontaine> trevnorris: btw, what's our story for someone relying on the older domains mechanism for MakeCallback?
21:21 < trevnorris> tjfontaine: how do you mean?
21:21 <@tjfontaine> people who might have just constructed their own object with .domain attached to it
21:22 < trevnorris> um... they're SOL.
21:22 <@tjfontaine> sigh, is there a way to extend domain.add to do the right thing at least?
21:22 <@tjfontaine> instead of just hooking up 'error'?
21:23 < trevnorris> how do you mean "the right thing"? I do have AsyncWrap::{Add,Remove}AsyncListener to set all the proper flags in domain.add
21:24 <@tjfontaine> hmm, in my rudimentary test which does .add({}) and then passes through node::MakeCallback, it's not being caught in the domain
21:25 <@tjfontaine> ah I see what may be the problem
21:25 < trevnorris> this is part of the reason for the EEO API. because people are bastardizing the EE and making it impossible for AL to properly handle all those cases.
21:26 < trevnorris> so EEO works sort of like a "fall back" that'll still allow all the domain stuff to get caught like it used to be.
21:26 < trevnorris> hell, we bastardize EE
21:26 <@tjfontaine> hmm, just trying to figure out what we can do for someone who might have been relying on domains without EEs being involved in their binary modules
21:26 <@tjfontaine> granted this number may be small
21:27 < trevnorris> ok. so node::MakeCallback is no longer used in core. so we could add the check back for the "domain" object property.
21:27 < trevnorris> it'll make the call slower, but it won't affect core performance.
21:27 <@tjfontaine> right, it might be necessary for backwards compatibility
21:28 <@tjfontaine> I'll try and do a manta query across npm to see if I can find that information out
21:28 < trevnorris> tjfontaine: but how are they using it? like, are they just checking if process.domain is set?
21:29 <@tjfontaine> well, they don't necessarily have to, they're just passing a receiver object with a .domain attached
21:29 < trevnorris> because the EE no longer checks for this.domain before emitting an event.
21:29 <@tjfontaine> they could be attaching it in any way they wanted
21:29 < trevnorris> ok
21:29 <@tjfontaine> not that we ever had a story about what that meant for addon authors already
21:31 < trevnorris> most likely othiym23 would have something to say about this
21:58 < othiym23> I think tjfontaine has the right idea, trevnorris. If manta coughs up something that an addon is relying on today, then we may have a problem
21:59 < othiym23> otherwise, who cares
21:59 < othiym23> I've always discouraged people from relying upon the implementation details of domains
21:59 < othiym23> hueniverse might care, though
21:59 < othiym23> his stuff is very hands-on in how it consumes domains
22:05 < trevnorris> othiym23: you're telling me :P
22:25 < othiym23> trevnorris: not really
22:25 < othiym23> I've never touched .domain on anything in any of my stuff
22:25 < othiym23> and does backwards compatibility get broken if nothing breaks?
22:30 < trevnorris> othiym23: heh, i meant more for the hapi domain tests.
22:33 < othiym23> well, shit, man, at least somebody's using domains
22:33 < othiym23> from my POV
--- Log closed Tue Jan 07 00:00:48 2014