Wayne (@leading) • Hey
Portfolio manager in Web3 and DeFi
Publications
- The evolution of employee headcount for the next-gen AI companies
- The next-gen AI companies: headcount, funding and valuation
- A density map of where the AI talent lives
- Top universities with the best AIGC research and the most talent
- Top companies in AIGC with the most talent
- The evolution of employee headcount of top AI companies
- The most successful AI companies with strong fundraising
- Crypto primitives can change unit economics in many aspects
- Larger opportunities lie ahead for crypto. RWA may be the route
- Bitcoin is backed by energy and compute monetization
- The new models for compute and connectivity
- Exponential demand for computing and connectivity. We need a new industrial revolution
- Top new models for energy nowadays
- Pure LK-99 sample behaves like a superconductor, live
- Here is how critical current and magnetic field of LK-99 evolve with critical temperature
- The resistivity of LK-99 fell to almost zero in two steps, at 105 and 65 degrees Celsius. Superconductivity manifests at room temperature and ambient pressure! It took decades of work
- LK-99 in the superconducting state - copper induces structural change and introduces Cooper pairs
- The superconducting LK-99 is a crystal with the following unit cell structure
- The middle and end product of the superconducting material, LK-99
- Breaking! We might be witnessing the first room-temperature ambient-pressure superconducting material discovered. A new era starts from here if this is verified
- Our energy consumption increases every year, most of which still comes from fossil fuels
- African export destinations and volume by continent and country
- Top 10 highest base salary positions in Google, 2023
- Comparison of GPT-4 and GPT-3.5 defense against AIM attacks
- Diagram of the nuclear fusion device
- Evaluation on Llama 2 and other fine tuned LLMs on different safety datasets
- Safety data scaling trends during Llama 2 training, showing increasing safety data is important
- The safety RLHF improved Llama 2 according to reward model score distributions
- Language distribution of pretraining data for Llama 2 in percentage. 90% is English, so performance is naturally best with English prompts
- Demographic representations in the pretraining corpus of Llama 2 showed some skews
- Pretraining data toxicity during the Llama 2-Chat training
- Human evaluation results, comparing Llama 2, open and closed source models
- Evolution of Llama 2-Chat and ChatGPT, evaluated by the reward model and GPT-4, respectively
- Scaling trends at varied data size in LLMs for the reward model
- Reward model results of LLMs on human preference benchmarks
- Statistics of human preference data for reward modeling
- Llama 2 compared to closed source models on academic benchmarks
- Overall performance on grouped academic benchmarks, Llama 2 compared to open source based models
- This one is interesting - the CO2 emission during pretraining of Llama 2
- The training loss of the Llama 2 model at different model sizes
- Llama 2 outperforms other LLMs in safety human evaluation results, making it a good alternative to closed-source LLMs
- The training of Llama 2-Chat, an open source model released recently by Meta. Free and powerful
- Ablation results comparing different variants of RetNet
- Perplexity results in language models. RetNet outperforms all others again
- Inference cost comparison of Transformer and RetNet at 6.7B model size. RetNet clearly outperforms
- Training cost comparison among multiple models
- Zero-shot and few-shot learning comparison, RetNet vs. Transformer
- RetNet perplexity decreases with model size faster than Transformer's
- Model comparison in more dimensions, RetNet and conventional methods
- Comparison between RetNet and Transformer in several dimensions