#linuxcnc-devel | Logs for 2018-11-14

Back
[00:37:58] -!- Tom_L has quit [Ping timeout: 245 seconds]
[00:38:46] -!- c-log has quit [Ping timeout: 268 seconds]
[00:41:36] -!- c-log has joined #linuxcnc-devel
[01:35:29] -!- c-log has quit [Ping timeout: 244 seconds]
[01:37:06] -!- c-log has joined #linuxcnc-devel
[01:46:54] -!- ve7it has quit [Remote host closed the connection]
[01:52:32] -!- c-log has quit [Ping timeout: 276 seconds]
[01:55:10] -!- c-log has joined #linuxcnc-devel
[02:32:15] -!- JT-Shop2 has quit [Ping timeout: 252 seconds]
[02:33:07] -!- JT-Shop has quit [Ping timeout: 240 seconds]
[02:34:07] -!- jthornton has quit [Ping timeout: 240 seconds]
[02:34:18] -!- Tom_itx has joined #linuxcnc-devel
[02:43:35] -!- jthornton has joined #linuxcnc-devel
[02:43:35] -!- JT-Shop has joined #linuxcnc-devel
[03:20:00] -!- Balestrino has joined #linuxcnc-devel
[03:45:06] -!- selroc has joined #linuxcnc-devel
[03:45:20] -!- selroc has quit [Remote host closed the connection]
[04:03:30] -!- SB_ has joined #linuxcnc-devel
[04:05:47] -!- Balestrino has quit [Ping timeout: 276 seconds]
[04:08:06] -!- Balestrino has joined #linuxcnc-devel
[04:09:31] -!- SB_ has quit [Ping timeout: 272 seconds]
[04:27:15] <rmu> seems hm2_rpspi is using excessive read- and write memory barriers
[04:28:05] -!- JT-Shop2 has joined #linuxcnc-devel
[04:30:09] <rmu> I'm pretty sure that those are not needed iff running realtime stuff on one core with isolcpus
[04:52:51] -!- Balestrino has quit [Quit: Leaving]
[04:53:09] -!- Balestrino has joined #linuxcnc-devel
[05:12:56] -!- KimK has quit [Quit: Leaving]
[05:23:13] -!- KimK has joined #linuxcnc-devel
[06:03:54] <rmu> still debugging rpi stuff.. there is definitely something strange going on here. I see no reason why e.g. pid calculation that usually has a thread time of around 600 can spike up to 35000
[06:04:43] <rmu> strange thing is that high function execution times line up, as if the whole CPU was slowed down for 0,5ms or so
[07:44:27] <jepler> https://github.com well this explains something I didn't understand before, why using setuid() in rtapi_app led to large latencies. glibc's setuid() wrapper actually interrupts all threads in the process in order to do its dirty work! Going more linux-specific and using the relevant syscalls directly might work around it, but would hurt portability.
[07:45:05] <jepler> sub-*processes* that open privileged file descriptors and pass them back is probably a long term and portable design, in case our freebsd porter ever comes back
[07:46:35] <jepler> rmu: interesting findings, does the CPU do thermal throttling or anything of that nature, that is documented?
[07:49:23] <rmu> jepler: i put a lot of effort into making sure no thermal or voltage throttling is happening, vcgencmd get_throttled reports 0x0 while this is happening
[07:50:14] <rmu> i tried different clock settings of "core" between 250 and 400 MHz and turned off frequency scaling
[07:52:20] <rmu> next thing i will try is doing some old-school gpio debugging with the oscilloscope, wiggling pins from within the hal components to see if they REALLY are slower or if the clock is doing funky things
[07:54:46] <rmu> i can get throttling to kick in with "cpuburn-a53", but usually while running linuxcnc it is running at about 50°C
[08:01:37] <rmu> i think i will prepare my findings with some screenshots and open a raspberry pi kernel issue or ask on the forum
[08:17:51] <rene_dev_> rmu does the rpi have a jtag interface?
[08:18:11] <Tom_itx> pretty sure that's how you program it
[08:18:24] <rene_dev_> try removing the memory barriers, and see what happens...
[08:18:39] <rene_dev_> no, its programmed by downloading stuff from the internet ;D
[08:24:44] <rene_dev_> hmm, looks like jtag tracing is not all that easy on a pi
[08:56:31] <mozmck> it seems that if I run the tests that failed one at a time, they pass. I got 15 that failed last time I tried and I had 15 segfaults in dmesg: halcmd[11249]: segfault at 7f0351f37000 ip 00007f03519053ab sp 00007ffd564e7ed0 error 4 in liblinuxcnchal.so.0
[08:57:08] <jepler> mozmck: what do you mean "one at a time"? linuxcnc's runtests runs the tests sequentially
[08:57:36] <mozmck> I also have a VM with ubuntu 18.04 (on this same machine) so I compiled master in that and I got only three failed because they could not find -lieee
[08:58:01] <mozmck> jepler: I ran the ones that failed like this: runtests tests/abs.0
[08:58:22] <mozmck> That runs only the named test
[09:10:00] <jepler> OK, when I read what you said I thought maybe you believed that "runtests" runs multiple tests concurrently. It doesn't.
[09:10:52] <jepler> It's possible you are seeing a problem that only manifests when your system is "under load", or only in a fraction of all runs... you could repeatedly run one test that you know fails in the hopes that it will eventually fail.
[09:10:53] <mozmck> Oh, no - I just went back and manually ran the tests that failed
[09:11:15] <jepler> this shell syntax will expand to "tests/abs.0" a bunch of times: runtests tests/abs.0{,,,}{,,,}{,,,}
[09:12:17] <mozmck> yep, failed 11 out of 64 runs
[09:12:41] <jepler> you are on linuxcnc master branch, or some other ref?
[09:12:52] <mozmck> master
[09:13:11] <mozmck> cleaned and rebuilt (several times now)
[09:13:49] <jepler> what OS?
[09:13:51] <mozmck> I'm running linuxmint 17.3 (basically the same as (x)ubuntu 14.04)
[09:14:15] <jepler> uspace then?
[09:14:47] <mozmck> I've run 2.7 without issues for years on this OS, and just rebuilt and ran the runtests on it without any fails last night.
[09:15:06] <mozmck> yes, uspace - but not a preempt-rt kernel on this machine
[09:15:09] <jepler> and it's hardware you trust too?
[09:15:42] <jepler> > Runtest: 64 tests run, 64 successful, 0 failed + 0 expected
[09:15:44] <mozmck> Well, I have trusted it as my main devel machine for several years now - but anything will fail eventually
[09:15:57] <jepler> this is my debian 9 machine, uspace, no RT kernel
[09:16:16] -!- KimK has quit [Ping timeout: 264 seconds]
[09:16:27] <jepler> are the segfault 'ip' addresses consistent or different? (but they might be different due to ASLR too)
[09:16:41] <mozmck> I would expect that if I was having machine problems they would show up in other things - I run a lot of programs all the time.
[09:16:49] <jepler> I expect tests/abs.0 to be as reliable as anything
[09:17:25] <mozmck> ip addresses are all different (what is ASLR?)
[09:17:37] <jepler> address space layout randomization
[09:17:41] <mozmck> ah
[09:17:44] <jepler> are the last 3 characters in the ip consistent?
[09:17:49] <jepler> "3ab"
[09:17:55] <mozmck> yes
[09:18:37] <jepler> OK so that likely means it's the same code, just loaded at a different randomized address in each run
[09:19:20] <mozmck> I see. so how can I go about finding that code?
[09:19:27] <jepler> (ASLR is a mitigation that makes certain bugs harder to exploit, because the exploit can't depend on any of the program's code being at a fixed address)
[09:20:13] <mozmck> I've heard of it, but didn't recognize the acronym. Makes sense though.
[09:20:14] <jepler> ideally you'd be able to force your linux to give you a core dump file that you can do postmortem debugging on with gdb
[09:20:59] <jepler> but I .. don't even know how to do that these days
[09:21:03] <mozmck> Maybe I have a buggy library or something that is causing it.
[09:21:16] <mozmck> I'll see if I can find info on doing that.
[09:21:25] <jepler> you could disable ASLR, then the address should be the same each time
[09:22:03] <jepler> if that's the case, then you should be able to find out what function that specific address is inside using gdb on a non-crashed halcmd
[09:22:08] <mozmck> Any idea why halcompile on 18.04 might be giving the error: /usr/bin/x86_64-linux-gnu-ld: cannot find -lieee?
[09:23:04] <jepler> (For instance, start halcmd, find its pid, run "gdb halcmd 12345", and then "x/i 0x00007f03519053ab")
[09:24:19] <jepler> I don't know why linuxcnc would be linking to -lieee but that library is provided by libc6-dev on my debian 9 system
[09:25:39] <jepler> by grepping the configure file I notice that -lieee ends up in TCL_LIBS because it is specified by tclConfig.sh but I wouldn't expect that to "get into" halcompile
[09:25:49] <mozmck> hah, so I did "echo 0 | sudo tee /proc/sys/kernel/randomize_va_space" thanks to askubuntu and now I have run the 64 tests 3 times without a failure.
[09:26:54] <jepler> well that's a stinker!
[09:27:08] <mozmck> Well, I have libc6-dev installed
[09:27:10] <mozmck> Yeah!
[09:28:01] <mozmck> maybe ASLR is buggy :-)
[09:28:10] <jepler> I have this on my system: kernel.randomize_va_space = 2
[09:28:51] <mozmck> Yeah, I think that is the default - full randomization
[09:29:01] -!- KimK has joined #linuxcnc-devel
[09:30:00] <jepler> OK, turn it back on and try to get a core dump then?
[09:30:36] <jepler> at least you can provoke a crash every few minutes, I was reading a "go" bug in which it seemed the bug was best reproduced with a 48-core machine and reproduced about once a day if you were lucky :-/
[09:30:38] <mozmck> I'll try in a few minutes. I'm running the full runtests now - so far no fails...
[09:30:45] <jepler> afk for a bit
[09:30:52] <mozmck> thanks
[09:34:12] <rmu> rene_dev_: I'm sure there is a JTAG somewhere in this SOC, but i'm not aware of accessible JTAG pins
[09:34:19] <rmu> rene_dev_: what do you have in mind?
[09:39:32] <rene_dev_> rmu https://www.segger.com
[09:39:45] <rene_dev_> but looks like that is no use for the pi
[09:44:15] <rmu> i used to work with lauterbach stuff, but in the end, those traces never helped that much.
[09:45:38] <rene_dev_> it would help if you know whats going on while its wasting time :D
[09:48:27] <rmu> situation is worse in 1ms cycle time vs. 0.333ms
[09:49:58] <rene_dev_> trigger a gpio in the usb irq
[09:50:16] <rene_dev_> its probably not all that easy :D
[09:50:26] <rmu> i don't believe cache or memory pressure alone is the culprit, because the PID calculation sometimes spikes from about 600ns to 35000ns
[09:51:19] <sync> ah you can jtag a rpi
[09:52:21] <sync> not sure if you can do etm trace on them
[09:52:55] <rmu> you surely can't exfil the complete instruction trace of 4 cores running at 1.4ghz
[09:53:39] <rmu> plus the evil stuff probably happens in the videocore and that does something evil on the bus (just a suspicion)
[09:54:30] <sync> well it supports etm
[09:55:17] <rene_dev_> I would just start disabling stuff, like usb, video, ... to see when stuff changes
[09:55:26] <rene_dev_> or just not use a rpi^^
[09:55:44] <rmu> the SPI read routine is reading the spi data register and busy waiting until transfer completes, so it is a small tight loop, transfer size also is small-ish, i don't see why that takes between 60000 and 250000 ns
[09:56:19] <rmu> yeah, i'm going to dump it. it seems to work somehow, but i don't like this ugliness.
[09:57:02] <rmu> running over X or on console doesn't make a difference
[09:59:09] <mozmck> well interesting. The full runtests ran with 0 failures with ASLR disabled. After enabling again, on the first run of 64 abs.0 - 16 failed
[10:00:52] <rene_dev_> Im looking, but its hard to find a real datasheet for the cpu
[10:02:58] <rene_dev_> who maintains the rt patch for the pi? could well be that they missed some stuff
[10:03:18] <rene_dev_> because rt preempt changes a lot of irq and lock behaviour
[10:07:52] <rmu> i don't think /proc/interrupts lies about what core services what IRQ
[10:09:02] <mozmck> jepler: so I enabled core dumps with ulimit -c unlimited, and gdb tell me the segfault is this:
[10:09:02] <mozmck> #0 test_and_set_bit (nr=0, addr=0x7f105bf17000) at rtapi/rtapi_bitops.h:48
[10:09:03] <mozmck> 48 unsigned long oldval = __sync_fetch_and_or(laddr + loff, 1lu << boff);
[10:11:09] <rmu> rene_dev_: don't bother looking for rpi datasheets, even if you found some, they would be incomplete or wrong
[10:13:19] <rene_dev_> rmu do you need servo interface, or step direction?
[10:14:33] -!- Balestrino has quit [Ping timeout: 252 seconds]
[10:29:55] <rmu> servo interface
[10:30:04] <rmu> analog servo
[10:32:26] <rmu> so running with 3khz would be possible and improve the jitter in the execution times, BUT smart serial is not fast enough... so that would need a hm2_7i90 read/write funct that only looks at the servo resp. only the smart serial stuff
[10:41:16] <rmu> man page of hm2_rpspi says "It should be noted that the Rpi3 must have an adequate 5V power supply and the power should be properly decoupled right on the 40-pin I/O header. At high speeds and noise on the supply, there is the possibility of noise throwing off the SoC's PLL(s), resulting in
[10:41:21] <rmu> strange behaviour."
[10:41:40] <rene_dev_> thats why I prefer ethernet
[10:43:21] <rene_dev_> which kernel do you use?
[10:44:47] <rmu> 4.14.74-rt44-v7-rmu+
[10:45:29] <rmu> just the rt-preempt branch from official rpf sources
[10:45:59] <rmu> i can make kernel packages available if somebody is interested
[10:46:51] <rene_dev_> this one? https://github.com
[10:46:56] <rmu> yes
[10:55:27] <rene_dev_> Showing 472 changed files with 17,474 additions and 5,183 deletions.
[10:55:34] -!- Balestrino has joined #linuxcnc-devel
[11:04:50] <rmu> AFAIK this is "just" the rt-preempt stuff rebased on the raspberry pi foundation kernel
[11:05:03] <rmu> or vice versa
[12:00:03] -!- Balestrino has quit [Read error: Connection reset by peer]
[12:24:46] <mozmck> So I see that the 2.7 rtapi_bitops.h file implements these functions using some assembly, but master uses compiler atomic functions. Seems that the old functions worked with ASLR enabled (on my particular setup), but the new ones randomly fail unless ASLR is disabled.
[12:27:34] <seb_kuzminsky> mozmck: that's surprising to me
[12:28:24] -!- ve7it has joined #linuxcnc-devel
[12:29:53] <mozmck> I see that the __sync_ functions are deprecated as of gcc 4.8, but I wouldn't think that would matter in this way.
[12:31:36] <mozmck> my gcc is 4.8.4 and kernel is the ubuntu 4.4.0-138-generic
[12:36:28] <rene_dev_> seb_kuzminsky did you see my question yesterday?
[12:36:40] <seb_kuzminsky> err, probably not, i'll read back
[12:36:57] <rene_dev_> 02:13 <rene_dev_> seb_kuzminsky whats #5400 or #_current_tool on startup?
[12:36:57] <rene_dev_> 02:14 <rene_dev_> I know what it is, but Im looking for documentation backing that up.
[12:37:00] <seb_kuzminsky> oh, i see it
[12:37:13] <seb_kuzminsky> i don't know the answer to that question off the top of my head
[12:37:35] <rene_dev_> so the behaviour is that its -1, until you do the first m61 or toolchange
[12:37:44] <rene_dev_> which indicates that it doesnt know what tool is loaded
[12:37:47] <seb_kuzminsky> i'd guess it's T0 on nonrandom machines, and whatever the tool table file says is in P0 on random toolchangers
[12:38:06] <seb_kuzminsky> T-1? ok, i guess that makes sense
[12:39:05] <rene_dev_> ideally I would like to make 5400 persistent, so it reloads the old tool on startup
[12:39:11] <rene_dev_> because that is how hardware behaves :D
[12:39:35] <rene_dev_> but I dont know if that should go into 2.7...
[12:39:57] <JT-Shop> you could change tools while the machine is off on many machines
[12:40:13] <seb_kuzminsky> i agree with JT
[12:40:18] <rene_dev_> you can also do that while the machine is running
[12:40:39] <seb_kuzminsky> i agree with that too :-)
[12:40:53] <rene_dev_> the machine never knows for sure what is in the spindle/turret
[12:41:09] <rene_dev_> the question is what it should expect to be in the spindle/turret
[12:41:27] <rene_dev_> IMHO it should expect to have the last tool
[12:41:43] <rene_dev_> gmoccapy has that implemented in the UI, where it reloads the last tool on startup
[12:41:47] <seb_kuzminsky> i think the current behavior is fine, and i don't want to change it in 2.7. If you feel strongly that it should remember #5400 in master i will not object
[12:41:57] <rene_dev_> ok
[12:42:00] <seb_kuzminsky> doing it in the UI seems like a mistake to me
[12:42:04] <rene_dev_> maybe make it configurable
[12:42:07] <rene_dev_> yes
[12:42:09] <mozmck> What is the effect of it not remembering?
[12:42:28] <rene_dev_> it is optional, but it doesnt belong there
[12:42:43] <rene_dev_> when you turn on the machine, you need to tell it what tool is in
[12:42:45] <seb_kuzminsky> i'm opposed to making it configurable - we should pick the best option and stick with it. we have too many knobs already
[12:42:47] <rene_dev_> and thats annoying
[12:42:47] <JT-Shop> I think it's a bad plan to assume the same tool in the spindle between running linuxcnc
[12:43:18] * JT-Shop goes back to chicken winterizing
[12:43:19] <seb_kuzminsky> it's sure safer to forget what tool's in the spindle and make the user remind us with m61 or m6
[12:43:36] <rene_dev_> why? on atc machines you usually cant change tools while the machine is off
[12:43:44] <mozmck> so what does it do if you don't tell it the tool? refuse to run? I have not used the tool table so far at all.
[12:44:03] <rene_dev_> that depends on your atc implementation
[12:44:04] <seb_kuzminsky> it runs, but it doesn't know what tlo and tool diameter are
[12:44:26] <rene_dev_> it should refuse to run, because it doesnt know if it needs to unload a tool, before picking up a new one
[12:44:35] <rene_dev_> both would crash the toolchanger
[12:44:43] <mozmck> ah, ok. It would seem that a warning or refusing to run would be good.
[12:45:11] <seb_kuzminsky> it could crash a nonrandom toolchanger, but not a random toolchanger (since that one tracks what's in the spindle)
[12:45:42] <seb_kuzminsky> i honestly don't have enough experience with nonrandom toolchangers to have a good intuition of how they should behave
[12:45:51] <rene_dev_> both should track that, because the hardware behaves exactly the same when you restart linuxcnc
[12:45:53] <jthornton> I can manually change tools on the atc then shut the machine down...
[12:46:21] <rene_dev_> you can always manually change tools, and not tell the machine. thats an operator error then
[12:46:41] <rene_dev_> if you want to crash, there is always a way
[12:46:53] <jthornton> it's a coder error to assume anything...
[12:47:39] <rene_dev_> im just thinking what the default is, and the default is that the tool remains in the spindle, unless you change it
[12:48:17] <rene_dev_> and when you change it, you have to tell the machine
[12:48:30] <rene_dev_> but when you dont change it, I dont feel the need to tell the machine
[12:48:58] <mozmck> That sounds reasonable.
[12:49:55] <mozmck> Either that or make it similar to the way homing works - if you power down you have to re-home before running code.
[12:50:31] <rene_dev_> at least #5400 and #_current_tool are the same now
[12:50:45] <mozmck> So tool could do the same - refuse to run code until you tell it what tool is in the spindle
[12:51:20] <rene_dev_> that would be very annoying for all the people that dont even use the tooltable, or dont have a toolchanger
[12:53:12] <mozmck> Yeah, refusing to run could have an ini override or something like homing does.
[12:54:20] <rene_dev_> yes, my idea was an ini setting which tool to load on startup: last, 0, or -1
[12:56:12] <mozmck> I'm thinking of something like NO_FORCE_HOMING = 1
[12:56:55] <rene_dev_> the assumption of not needing to track the spindle tool on non-random changers is just wrong
[12:56:56] <mozmck> So NO_FORCE_TOOL = 1 would mean it would run even if no tool is defined/selected - otherwise it would refuse to run code.
[12:57:15] <rene_dev_> because you do need to track, because you need to know if and where to put the tool when you pick up a new one
[12:57:49] <rmu> +1 for persisting the tool in the spindle
[12:58:41] <rmu> it would be most annoying in an ATC if you have a tool loaded and the machine doesn't know which one it is
[12:58:45] <rene_dev_> I have observed people using machines I retrofitted
[12:59:23] <mozmck> rene_dev_: have you had a chance to look at the reverse-run issue again?
[13:00:34] <mozmck> bbl - rebooting
[13:00:40] <JT-Shop> anytime you force machine specific configuration options you will be wrong at least half the time... things like that should be up to the system integrator to configure
[13:00:45] -!- mozmck has quit [Quit: Leaving.]
[13:02:47] -!- mozmck has joined #linuxcnc-devel
[13:02:50] <rmu> ini setting that specifies last, specific tool, or empty seems sensible
[13:03:36] <rene_dev_> mozmck rob promised to look at it, but he didnt yet. I think I know how to fix it, and will try again this week.
[13:03:55] <mozmck> rene_dev_: ok
[13:04:08] <rene_dev_> yes, its always integrator specific
[13:04:43] <rene_dev_> but my observation is that usually, if you tell people that they need to tell the machine which tool is in the spindle
[13:04:51] <rene_dev_> the answer will be that they dont know :D
[13:04:56] <rene_dev_> because they forgot to label them
[13:05:09] <jepler> mozmck: can you use your core dump and generate full backtrace? Might need to pastebin it, it could be long.
[13:05:21] <rene_dev_> and if they get it wrong, you will crash the toolchanger
[13:05:22] <jepler> you can do that with the "bt" command in gdb
[13:05:34] <mozmck> jepler: I'll try...
[13:07:13] <jepler> you could probably substitute just the 2.7 code for the atomic operation and see if that fixes anything; I think it's unlikely, but this whole thing feels unlikely.
[13:07:36] <jepler> or you could 'git bisect' between 2.7 and master, but that would be a lengthy process
[13:08:01] <rene_dev_> there are tools to automate that
[13:08:02] <mozmck> https://pastebin.com
[13:22:10] <jepler> thanks
[13:22:25] <jepler> so this is occurring when halcmd reinitializes itself after spawning a sub-program (for 'loadrt' in this case)
[13:23:50] <jepler> it looks like a hal file with a long string of 'loadusr -w true' should be able to trigger it more reliably
[13:23:53] <jepler> maybe repeat it 1000x
[13:25:13] <jepler> each time rtapi_init is called, it has to choose a new "unique identifier" which is just an integer that counts up forever
[13:25:50] <jepler> but in the case of the crash, the shared memory segment that has just been supposedly mapped by rtapi_shmem_getptr, is not valid
[13:27:15] <jepler> printf("note [%5d] uuid_mem = %p\n", getpid(), uuid_mem);
[13:27:16] <jepler> if (uuid_shmem_base == 0) {
[13:27:35] <jepler> on my system, the same address is printed each time, at least within the same process, because the first mapping made is retained even if rtapi_exit() is called
[13:27:45] <jepler> errr
[13:27:50] <jepler> maybe that's the change actually...!
[13:28:10] <jepler> in 2.7 rtapi_exit does nothing, in master it deletes the segment
[13:28:55] <jepler> commit 7c0c274d93b4fa5e24ce53f1569eae02d01c2721
[13:29:03] <jepler> uspace: delete the "uuid"(sic) shared memory at exit
[13:29:04] <jepler> Reported-By: Edward Tomasz Napierala <trasz@FreeBSD.org>
[13:29:26] <jepler> maybe try reverting this
[13:29:37] <jepler> I don't know what problem it was supposed to address :-/ that's all the notes I have
[13:29:41] <jepler> afk again
[13:39:40] <mozmck> log
[13:39:41] <c-log> mozmck: Today's Log http://tom-itx.no-ip.biz:81
[13:43:56] <mozmck> jepler: I tried reverting that commit and it still has the failures here
[15:09:09] -!- JT-Shop2 has quit [Quit: Leaving]
[15:09:29] -!- JT-Shop2 has joined #linuxcnc-devel
[15:10:20] -!- JT-Shop2 has quit [Client Quit]
[15:10:40] -!- JT-Shop2 has joined #linuxcnc-devel
[15:55:49] -!- andypugh has joined #linuxcnc-devel
[15:56:14] <andypugh> jepler: Did you get the latest Laundry file?
[15:56:47] <andypugh> I think I know Squadron Leader Bradshaw.
[15:57:15] <andypugh> https://www.linkedin.com
[15:58:03] <andypugh> (Given that both myself and the chap in question were at Stross’s birthday party in 2014 it can’t be coincidence.)
[16:31:42] -!- mozmck has quit [Quit: Leaving.]
[17:28:37] <jepler> andypugh: I've gotten a little ways into it. Can't use up a good book like that all at once
[17:28:50] <jepler> man I want to go to Stross's birthday party
[17:29:22] <jepler> mozmck: dang I really wish I could reproduce this failure locally! Yours is a 64 bit system, right? (mine is)
[17:39:38] <andypugh> I got a bit lucky. It was at Worldcon and Stross has exactly the same birthday as a friend of mine, and he knows Stross, so they held a joint 50th party, and I was one of Cosmic’s mates.
[17:39:55] <jepler> indeed
[17:42:29] <andypugh> Squadron Leader Bradshaw has written some SF himself, one is pretty good “First to the Moon” joint with Steven Baxter. But I can’t currently find the text online
[17:43:57] <andypugh> (Alternate history, no WW1, british moon shot in wood and bakelite rocket ends in partial disaster)
[17:44:47] <andypugh> Basically the British Interplanetary Society moon expedition design from 1936
[17:47:39] <andypugh> (Incidentally, did you look at the Linkedin profile, I think his hobby is collecting degrees, he has three Masters degrees and a couple of others on top)
[18:09:34] -!- mozmck_lp has joined #linuxcnc-devel
[18:10:12] <mozmck_lp> log
[18:10:12] <c-log> mozmck_lp: Today's Log http://tom-itx.no-ip.biz:81
[19:00:12] <rene_dev_> andypugh Dont know what you are talking about, but I know a guy called Bradshaw that writes and publishes books: http://www.jrbpub.net
[19:27:36] <andypugh> Different one.
[19:28:26] <andypugh> The latest Charles Stross book (SF) features a character fairly clearly named after one of my friends from college.
[20:14:33] -!- andypugh has quit [Quit: andypugh]
[20:15:34] -linuxcnc-github:#linuxcnc-devel- [13linuxcnc] 15zultron commented on issue #213: IIRC there was discussion about this topic on the emc-developers list several years ago. This was at least on the roadmap for Machinekit, but I don't think it was ever implemented.... 02https://github.com/LinuxCNC/linuxcnc/issues/213#issuecomment-438879316
[21:02:42] -!- mozmck_lp has quit [Quit: Leaving.]
[21:04:34] -!- mozmck has joined #linuxcnc-devel
[22:18:04] -!- mozmck has quit [Ping timeout: 244 seconds]
[22:20:28] -!- JT-Shop has quit [Remote host closed the connection]
[22:21:30] -!- JT-Shop has joined #linuxcnc-devel
[22:43:16] -!- mozmck has joined #linuxcnc-devel
[23:11:31] -!- c-log has quit [Ping timeout: 250 seconds]
[23:13:29] -!- c-log has joined #linuxcnc-devel
[23:31:21] -!- Tom_L has joined #linuxcnc-devel
[23:42:28] -!- Tom_L has quit [Quit: Leaving]
[23:42:36] Tom_itx is now known as Tom_L