Message Boards » Tech Talk » AI at NCSU (Page 2)
StTexan
#StayStrong
12405 Posts

Fellas, hire me for something. I'm sure I could make money

1/24/2026 7:31:56 AM

CaelNCSU
All American
7860 Posts

We only hire robots now and worry about our own jobs. At least I do.

Ironically, my current company is mega scared of building new stuff even though the cost of building it is low. There is the operational concern about how to support all this AI generated software.

1/24/2026 1:51:30 PM

StTexan
#StayStrong
12405 Posts

Not disappointed if robot taxis take the jobs of DoorDash folks, etc.

1/29/2026 3:30:46 AM

OmarBadu
zidik
25116 Posts

Quote :
"There is the operational concern about how to support all this AI generated software."


we don't have a fully automated solution yet for pure backend issues, but for frontend ones we are using Sentry.io to detect issues and then have claude code 'autofix' them as they are detected - we are a few months into doing this and it's working incredibly well now

we started off by limiting it to a max of 2 open PRs while we worked out kinks and refined the claude.md file and corresponding rules files - initially it was doing stupid things like 'fixing' a frontend issue by calling a new backend API, but it would only propose that the new backend API needs to be created without actually creating it - now it one-shots most fixes

now it runs almost entirely on its own and on the occasions it makes a mistake we make refinements - the effort we put in compared to the gains we are seeing isn't even close - it's made everyone's lives better unless you despise reviewing PRs
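for the curious, here's a rough sketch of what this kind of sentry-to-claude pipeline could look like - not our actual code, purely illustrative, and the webhook payload shape, labels, and prompt are made up:

```python
# Illustrative only: a Sentry issue-alert webhook handler that kicks off a
# Claude Code autofix run, capped at 2 open autofix PRs at a time.
# Assumes the `gh` CLI is installed/authenticated and that the `claude` CLI's
# non-interactive print mode (-p) is available; payload fields are approximate.
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

MAX_OPEN_AUTOFIX_PRS = 2  # the cap used while working out the kinks


def open_autofix_pr_count() -> int:
    # Count open PRs labeled 'autofix' in the current repo via the gh CLI.
    out = subprocess.run(
        ["gh", "pr", "list", "--label", "autofix", "--state", "open",
         "--json", "number"],
        capture_output=True, text=True, check=True,
    )
    return len(json.loads(out.stdout))


def run_autofix(title: str, culprit: str, url: str) -> None:
    # Hand the Sentry issue to Claude Code and let claude.md / rules files
    # steer what an acceptable fix looks like.
    prompt = (
        f"Sentry frontend issue: {title}\n"
        f"Culprit: {culprit}\nLink: {url}\n"
        "Reproduce it, fix it in the frontend only, run the tests, "
        "and open a PR labeled 'autofix'."
    )
    subprocess.run(["claude", "-p", prompt], check=True)


class SentryWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        issue = body.get("data", {}).get("issue", {})  # shape is approximate
        if open_autofix_pr_count() < MAX_OPEN_AUTOFIX_PRS:
            run_autofix(issue.get("title", ""), issue.get("culprit", ""),
                        issue.get("permalink", ""))
        self.send_response(202)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("", 8080), SentryWebhook).serve_forever()
```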

1/29/2026 10:04:12 AM

CaelNCSU
All American
7860 Posts

That's a huge win, going from theoretical to needing minimal input. It's wild seeing all the workflows popping up, mixed with the navel gazing about "maybe this will be possible" and "AI doesn't really work and just copies code". Super cool. Terrifying, but cool.

Have you seen it flag something like a cert or dns error that was a legitimate error but not a front end error per se?

1/29/2026 10:33:46 AM

OmarBadu
zidik
25116 Posts

Quote :
"Have you seen it flag something like a cert or dns error that was a legitimate error but not a front end error per se?"


don't think we've seen specifically a cert or dns issue but we have seen a sporadic cors / csp issue while we were doing some url migrations

1/29/2026 11:03:37 AM

CaelNCSU
All American
7860 Posts

Are you grounding it at all with something like speckit or conductor? Or just your own prompt tweaking to help the guard rails?

1/29/2026 11:57:38 AM

OmarBadu
zidik
25116 Posts

mostly rolled it ourselves with a combination of prompt tweaking / claude.md rules & memory management

as mistakes are found in PRs we tweak along the way to make improvements - sometimes we change the overall claude.md, but more often we are updating a claude.md file in a specific folder where claude just needed more context - it's everyone's job to check in improvements to our md files, although some are better about it than others
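to give a flavor of what one of those folder-level files looks like, here's a made-up example - the folder, context, and rules are invented for illustration, not our real setup:

```markdown
# CLAUDE.md (hypothetical example for a src/checkout/ folder)

## Context
- This folder only renders the checkout UI; pricing and payment logic live in the backend.

## Rules
- Never "fix" a frontend issue by proposing a new backend API - use existing endpoints only.
- Run the checkout lint and test scripts before opening a PR.
- Keep autofix PRs small: one Sentry issue per PR, labeled `autofix`.
```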

1/29/2026 3:50:04 PM

moron
All American
35649 Posts

I started working with the organizers to help plan the meetups. I saw someone from Red Hat sign up to present - was that you, snewf?

2/4/2026 3:15:47 PM

moron
All American
35649 Posts

Resurrected an old device with ai

I have an M-Audio Transit USB that hasn't had working macOS drivers for 15 years. I decompiled the last known Mac drivers and told AI to make them work -- it took about 2 hours of back and forth (and my own knowledge of the driver architecture for this device) but it works!

https://ironj.github.io/maudio-transit/

2/5/2026 10:46:07 AM

moron
All American
35649 Posts

This went semi-viral on bsky, thought you all should be in the loop too:

Quote :
" 16000 tokens per second on a decent model. This type of speed is the future. Opens up an entirely new class of user experiences

https://www.reddit.com/r/LocalLLaMA/comments/1r9e27i/free_asic_llama_31_8b_inference_at_16000_toks_no/

"

2/20/2026 5:55:01 PM
