🐿️ Scour · Browse
🏡 Locally running LLMs

AI privacy

A lightweight Cloudflare Dynamic DNS shell script
github.com·8h·
Discuss: Hacker News
⌨️Prompt Engineering
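
The linked script is shell-based; as a rough illustration of the same dynamic-DNS idea, here is a minimal Python sketch against Cloudflare's DNS records API. The hostname, zone/record IDs, and the public-IP lookup service are placeholder assumptions, not details taken from the linked repository.

```python
# Minimal Cloudflare dynamic-DNS updater (illustrative sketch, not the linked script).
# Assumes CF_API_TOKEN has DNS edit rights; zone/record IDs are hypothetical placeholders.
import os
import requests

CF_API_TOKEN = os.environ["CF_API_TOKEN"]
ZONE_ID = os.environ["CF_ZONE_ID"]        # placeholder, looked up once beforehand
RECORD_ID = os.environ["CF_RECORD_ID"]    # placeholder, looked up once beforehand
RECORD_NAME = "home.example.com"          # placeholder hostname

def current_public_ip() -> str:
    # api.ipify.org is one common plain-text "what is my IP" endpoint.
    return requests.get("https://api.ipify.org", timeout=10).text.strip()

def update_record(ip: str) -> None:
    # Overwrite the existing A record with the current public address.
    url = f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records/{RECORD_ID}"
    resp = requests.put(
        url,
        headers={"Authorization": f"Bearer {CF_API_TOKEN}"},
        json={"type": "A", "name": RECORD_NAME, "content": ip, "ttl": 300, "proxied": False},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    update_record(current_public_ip())
```
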
Systemd's Nuts and Bolts
medium.com·13h·
Discuss: r/programming
⌨️Prompt Engineering
You Shall Not Pass: Fine-Grained Access Control with Row-Level Security
cockroachlabs.com·1d·
Discuss: Hacker News
🔮AI prompt engineering tools
The Impact of Event Data Partitioning on Privacy-aware Process Discovery
arxiv.org·1d
⌨️Prompt Engineering
Model Collapse Is Not a Bug but a Feature in Machine Unlearning for LLMs
arxiv.org·2d
⌨️Prompt Engineering
Winning and losing with Artificial Intelligence: What public discourse about ChatGPT tells us about how societies make sense of technological change
arxiv.org·7h
🔮AI prompt engineering tools
Detection of Intelligent Tampering in Wireless Electrocardiogram Signals Using Hybrid Machine Learning
arxiv.org·7h
🤖AI
RVISmith: Fuzzing Compilers for RVV Intrinsics
arxiv.org·2d·
Discuss: Hacker News
⌨️Prompt Engineering
CAVGAN: Unifying Jailbreak and Defense of LLMs via Generative Adversarial Attacks on their Internal Representations
arxiv.org·1d
⌨️Prompt Engineering
Lilith: Developmental Modular LLMs with Chemical Signaling
arxiv.org·2d
🔮AI prompt engineering tools
Evaluation of OpenAI o1: Opportunities and Challenges of AGI
arxiv.org·1d
🤖AI
Subgraph Counting under Edge Local Differential Privacy Based on Noisy Adjacency Matrix
arxiv.org·7h
🤖AI
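
As background for the item above: the standard building block for edge local differential privacy is randomized response on adjacency bits, where each bit is flipped with probability 1/(1+e^ε) and the collector debiases the resulting counts. The sketch below shows that generic mechanism with an unbiased edge-count estimate; it is not the paper's specific subgraph-counting estimator.

```python
# Generic randomized-response perturbation of an adjacency matrix under
# edge local differential privacy, with a debiased edge (1-bit) count.
import numpy as np

def perturb_adjacency(adj: np.ndarray, eps: float, rng=None) -> np.ndarray:
    """Flip each bit independently with probability 1/(1+e^eps)."""
    rng = np.random.default_rng() if rng is None else rng
    p_keep = np.exp(eps) / (1.0 + np.exp(eps))
    flips = rng.random(adj.shape) >= p_keep        # True where the bit gets flipped
    return np.where(flips, 1 - adj, adj)

def debiased_one_count(noisy_adj: np.ndarray, eps: float) -> float:
    """Unbiased estimate of the true number of 1-bits from the noisy matrix."""
    p = np.exp(eps) / (1.0 + np.exp(eps))          # probability a bit is reported truthfully
    q = 1.0 - p
    n_bits = noisy_adj.size
    observed_ones = noisy_adj.sum()
    # E[observed] = true*p + (n_bits - true)*q, solved for true.
    return (observed_ones - n_bits * q) / (p - q)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    adj = (rng.random((200, 200)) < 0.05).astype(int)
    noisy = perturb_adjacency(adj, eps=1.0, rng=rng)
    print("true:", adj.sum(), "estimate:", round(debiased_one_count(noisy, eps=1.0)))
```
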
From LLMs to Actions: Latent Codes as Bridges in Hierarchical Robot Control
arxiv.org·1d
🔮AI prompt engineering tools
MGAA: Multi-Granular Adaptive Allocation for Low-Rank Compression of LLMs
arxiv.org·2d
🤖AI
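
For context on the item above: low-rank compression of an LLM weight matrix typically starts from a truncated SVD, replacing W with two thin factors so that W ≈ A·B. The sketch below shows only that generic baseline step; the multi-granular, adaptive rank allocation is the paper's contribution and is not reproduced here.

```python
# Baseline low-rank compression of a single weight matrix via truncated SVD.
# Illustrates the generic factorization step only, not the MGAA allocation scheme.
import numpy as np

def low_rank_factors(W: np.ndarray, rank: int):
    """Return (A, B) with A @ B approximating W; A is (m, rank), B is (rank, n)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]        # absorb singular values into the left factor
    B = Vt[:rank, :]
    return A, B

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((512, 512))
    A, B = low_rank_factors(W, rank=64)
    rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
    print(f"relative error {rel_err:.3f}, params {W.size} -> {A.size + B.size}")
```
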
Wallets as Universal Access Devices
arxiv.org·7h
⌨️Prompt Engineering
An autonomous agent for auditing and improving the reliability of clinical AI models
arxiv.org·1d
🤖AI
Phantom Subgroup Poisoning: Stealth Attacks on Federated Recommender Systems
arxiv.org·7h
🤖AI
Re-Emergent Misalignment: How Narrow Fine-Tuning Erodes Safety Alignment in LLMs
arxiv.org·2d
⌨️Prompt Engineering
GTA1: GUI Test-time Scaling Agent
arxiv.org·1d
⌨️Prompt Engineering
On Jailbreaking Quantized Language Models Through Fault Injection Attacks
arxiv.org·2d
⌨️Prompt Engineering