d3ad social

Tags labelled 'speedup'

@barray on Sun Oct 23 13:21:16 UTC 2022 said:
@barray on Sun Oct 23 13:10:18 UTC 2022 said:
This is probably the *most* #important thing you will #watch this #weekend https://youtube.com/watch?v=Ip_zVtrFpzs #china 's #ccp are now actively and #publicly #expelling #internal #party #members . The only reason to do this is to #consolidate #power , and this is likely because they are about to do something that requires it. I highly suggest you #watch #taiwan . This will be bigger than #russia 's #invasion of #ukraine .
Before I had even seen this #article , I now see my #suspicion somewhat #confirmed as #china 's #ccp look to boost their #military to #speedup #taiwan " #reunification ": https://www.zerohedge.com/geopolitical/c.. I think we can expect a possible #invasion within 6 months.
@barray on Thu Feb 10 00:15:24 UTC 2022 said:
Awesome #effort to #speedup the #rendering #performance of #openscad https://ochafik.com/jekyll/update/2022/0.. Some very #promising exploration going on in that space!
@barray on Sun Nov 14 04:43:51 UTC 2021 said:
I recently checked out #ytdlp - an #alternative or #improvement to #youtubedl https://www.funkyspacemonkey.com/replace.. This offered a #significant #speedup in #downloading #youtube #videos - I really suspect that YouTube attempts to #detect the use of youtube-dl and #ratelimit it. This now means that #ytoff - my #invidious alternative - loads a lot faster: https://github.com/danielbarry/ytoff I will write a #coffeespace #article about it soon.
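For anyone curious, here is a minimal sketch of driving #ytdlp from Python rather than the shell. The output template and argument handling are my own placeholder choices, not anything from the article above:

```python
# Minimal sketch (my own, not from the linked article) of calling yt-dlp
# from Python instead of the command line. Assumes `pip install yt-dlp`.
import sys

from yt_dlp import YoutubeDL

def download(url: str) -> None:
    opts = {
        # Name the output file after the video title; a placeholder choice.
        "outtmpl": "%(title)s.%(ext)s",
    }
    with YoutubeDL(opts) as ydl:
        ydl.download([url])

if __name__ == "__main__":
    # Usage: python fetch.py <video-url>
    download(sys.argv[1])
```

On the command line it is simply a drop-in replacement: `yt-dlp <url>` where you would previously have run youtube-dl.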
@barray on Sat Oct 23 02:28:33 UTC 2021 said:
Talk about some #quick #gains - this #compiler switch for some #pinephone #library sees a 5x #speedup ! https://pineguild.com/maui-applications-.. Got to love them quick wins!
@barray on Sat Aug 07 00:26:39 UTC 2021 said:
Nice discussion on whether #halfprecision #floats are worth the effort or not, specifically on the #gpu https://futhark-lang.org/blog/2021-08-05.. Turns out the #speedup really isn't that large. I suspect it *may* be better when you have large vectors - you can literally keep twice as much in #cache and fit larger models in #ram ... But still, it's worth checking whether you actually get the expected speed-up or not.
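As a rough way to sanity-check that kind of claim on your own hardware, here is a CPU-side NumPy sketch (not the #gpu / Futhark setup from the linked post; the matrix size and repeat count are arbitrary choices of mine) comparing float16 against float32:

```python
# Rough benchmark sketch: does float16 actually beat float32 here?
# This is NumPy on the CPU, not the GPU case discussed in the linked post.
import time

import numpy as np

def time_matmul(dtype, n: int = 2048, repeats: int = 3) -> float:
    # Two random n x n matrices in the requested precision.
    a = np.random.rand(n, n).astype(dtype)
    b = np.random.rand(n, n).astype(dtype)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        _ = a @ b
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    for dtype in (np.float32, np.float16):
        print(f"{np.dtype(dtype).name}: {time_matmul(dtype):.3f} s per matmul")
```

On many CPUs the float16 case actually comes out slower because the arithmetic isn't done natively in half precision - which is exactly the "measure it first" point. The halved memory footprint for #cache and #ram is the part that carries over regardless.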