AI-generated ecommerce visuals in minutes
pomelli-ai.com·18h·
Discuss: Hacker News
🤖AI Coding Tools
Mastering Web Internationalization
w3.org·8h·
Discuss: DEV
💡LSP
Norway Leads Global EV Adoption
oilprice.com·13h·
Discuss: Hacker News
🚲Bike
Understanding Tokenization in Large Language Models
pub.towardsai.net·9h
🎓Model Distillation
Benchmarking Large Language Models and Privacy Protection
priv.gc.ca·1d
⚡ONNX Runtime
Show HN: Extrai – An open-source tool to fight LLM randomness in data extraction
github.com·14h·
Discuss: Hacker News
🐕Ruff
GeneFlow: Translation of Single-cell Gene Expression to Histopathological Images via Rectified Flow
arxiv.org·4h
🔄ONNX
Efficiency vs. Alignment: Investigating Safety and Fairness Risks in Parameter-Efficient Fine-Tuning of LLMs
arxiv.org·4h
🔄ONNX
Analyzing Sustainability Messaging in Large-Scale Corporate Social Media
arxiv.org·4h
🔄ONNX
MaGNet: A Mamba Dual-Hypergraph Network for Stock Prediction via Temporal-Causal and Global Relational Learning
arxiv.org·4h
⚡ONNX Runtime
Rethinking Cross-lingual Alignment: Balancing Transfer and Cultural Erasure in Multilingual LLMs
arxiv.org·4d
📉Model Quantization
ParaScopes: What do Language Models Activations Encode About Future Text?
arxiv.org·4h
🎓Model Distillation
Bridging Vision, Language, and Mathematics: Pictographic Character Reconstruction with Bézier Curves
arxiv.org·4h
🧩Attention Kernels
Hyper Hawkes Processes: Interpretable Models of Marked Temporal Point Processes
arxiv.org·4h
🏎️TensorRT
Phased DMD: Few-step Distribution Matching Distillation via Score Matching within Subintervals
arxiv.org·1d
🧮cuDNN
MISA: Memory-Efficient LLMs Optimization with Module-wise Importance Sampling
arxiv.org·4h
📊Gradient Accumulation
Probabilistic Robustness for Free? Revisiting Training via a Benchmark
arxiv.org·4h
📊Gradient Accumulation