크립토 오프로드 (Let Winners Run)
OpenMythos — An Open-Source Attempt to Reconstruct Claude's Internal Structure from Public Papers

→ This is a theoretical reconstruction project that reassembles the Claude "Mythos" architecture from scratch using only publicly available research literature.
→ The core hypothesis is that Mythos is a Recurrent-Depth (Looped) Transformer that runs the same layers multiple times.
→ Unlike Chain-of-Thought, which emits intermediate tokens, iterative inference happens silently in latent space within a single forward pass.
→ The author argues that depth is handled by looping, while breadth across domains is handled by MoE (Mixture of Experts).
→ Alongside the PyTorch implementation, supporting material such as stability proofs, scaling laws, and loop-index embeddings is also organized.

**How It Differs from Existing Transformers**
Existing Transformers secure depth by stacking hundreds of distinct layers in series. The Looped Transformer reconstructed by OpenMythos instead divides the network into three blocks. The flow is Prelude (input encoding) → Recurrent Block (iterative execution) → Coda (output cleanup), where the middle Recurrent Block is run multiple times with the same weights. This structure enables deeper thinking by increasing the number of loops for harder problems.

**Key Update Rule**
In every loop, the hidden state is updated as h_{t+1} = A·h_t + B·e + Transformer(h_t, e). The important point is that the original input e is re-injected in every loop. Without it, the original signal would blur as the iteration lengthens; input injection prevents this.

**Why Mythos Is Presumed to Have This Structure**
The author gives four reasons. First, the Looped Transformer passes systematic generalization tests, handling combinations never seen during training. Second, depth extrapolation is observed: even a model trained on 5-hop inference can solve 10-hop problems when the number of loops is increased at inference time. Third, each loop corresponds to a single CoT step in continuous latent space, as formally proven by Saunshi et al. (2025). Fourth, running k layers L times yields quality comparable to a kL-layer model, achieving depth without parameter explosion.

**Note**
This repository is strictly a theoretical reconstruction based on public literature; it has not been verified that Anthropic actually built Mythos this way. The repository is MIT-licensed and includes PyTorch example code and API documentation. Running it requires selecting an attention type (mla or gqa) and configuring MythosConfig.

#LoopedTransformer #ClaudeMythos #MoE #AIArchitecture #OpenSource
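The update rule described in the post can be sketched in a few lines. This is a toy scalar version for illustration only, not the OpenMythos PyTorch code: here A and B are scalars and `block` is a stand-in for the shared Transformer block.

```python
# Toy scalar sketch of the recurrent-depth update rule
#   h_{t+1} = A*h_t + B*e + Transformer(h_t, e)
# `block` stands in for the shared Transformer block; the real repo
# uses full PyTorch modules, not scalars.

def block(h, e):
    # placeholder for Transformer(h_t, e)
    return 0.1 * (h + e)

def recurrent_depth(e, num_loops, A=0.5, B=0.5):
    h = 0.0  # the Prelude would normally encode the input into h
    for _ in range(num_loops):
        # the original input e is re-injected on every loop, which
        # keeps the signal from washing out over long iterations
        h = A * h + B * e + block(h, e)
    return h

# harder problems -> more loops -> deeper latent "thinking"
print(round(recurrent_depth(1.0, 2), 6))
```

The same weights are reused on every iteration, which is exactly why depth can be extrapolated at inference time: `num_loops` is a runtime knob, not a fixed architectural constant.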
MiroFish — Swarm Intelligence Project Summary

Main Project: github.com/666ghj/MiroFish (Original)
- One-line Summary: "A concise and universal swarm intelligence engine predicts everything"
- GitHub Stars: 53,000+ (ranked #1 on Global Trending after its March 2026 launch)
- Developer: Guo Hangjiang (20-year-old Chinese student), built via VibeCoding in just 10 days
- Investment: secured 30 million yuan (approx. $4.1M) within 24 hours of launch (backed by Shanda Group)

How It Works
1. Seed Input: provide source documents such as news articles, financial reports, policy drafts, or novels
2. Knowledge Graph Construction: automatically generate a digital world by extracting entities and relationships
3. Agent Simulation: thousands to a million independent AI agents (unique personalities, long-term memory, behavioral logic) freely interact and socially evolve
4. Prediction: infer future developments by dynamically injecting variables from a "God's-eye view"

Technology Base: OASIS (Open Agent Social Interaction Simulations), which scales up to 1 million agents and supports 23 social actions (Follow, Comment, Repost, Like, Mute, etc.)

Referenced/Dependent Repositories (Technical Lineage)
- Conceptual Ancestor: joonspk-research/generative_agents (Stanford Smallville), the 2023 ChatGPT-based 25-agent town simulation that started the concept
- Core Engine: camel-ai/oasis, the real-world simulation engine (up to 1 million agents, 23 social actions); MiroFish is a product layer built on top of OASIS
- Memory: Zep Cloud, for agent long-term memory
- Knowledge Graph: GraphRAG, for seed document → entity/relationship extraction
- Stack: Vue frontend + Python/FastAPI backend, the productization layer

Related Forks/Derivative Projects
1. github.com/ByeongkiJeong/MiroF... / unofficial Korean version
2. github.com/nikmcfly/MiroFish-O... / offline version (Neo4j + Ollama local stack)
3. github.com/ChinmayShringi/Micr... / English translation fork
4. github.com/parety/Miro-Fish / community fork

Post-Viral Star Growth: 18K (early March) → 28K → 33K+ → partially tallied at 53K; forks 1,900+
Version: agent cap expanded from 700,000 to 1,000,000 in v0.1.2

Insight Cases:
- The developer hooked it up to a Polymarket bot, simulating 2,847 people before each trade: 338 trades reportedly returned $4,266 in profit
- All 80 chapters of *Dream of the Red Chamber* were fed in: a demo predicting the lost ending based on character dynamics
- Corporate applications: public-opinion crisis simulation, predicting market reaction to supply-chain shocks

Limitations: cannot predict ultra-short-term (15 minutes or less) financial microstructure or order flow; it specializes in how public opinion takes shape

Organization: Guo Hangjiang went from intern to CEO, incubated by Shanda Group (Chen Tianqiao)

─────
Salon de l'IA (AI Salon): We will always strive to deliver helpful information and diverse insights! 🙏
#AI #Mirofish #SwarmIntelligence #GuoHangJiang
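As a rough illustration of the four-step loop described in the post (seed document → knowledge graph → agent swarm → dominant-behavior prediction), here is a toy plain-Python sketch. Every name in it (`build_graph`, `Agent`, `simulate`) is hypothetical; the real system relies on OASIS, GraphRAG, and LLM-driven agents, none of which appear here.

```python
import random

def build_graph(seed_text):
    # Step 2 (toy): treat capitalized words as entities and link them all;
    # the real pipeline uses GraphRAG-style entity/relation extraction.
    entities = sorted({w.strip(".,") for w in seed_text.split() if w.istitle()})
    return {e: [x for x in entities if x != e] for e in entities}

class Agent:
    # Step 3 (toy): each agent keeps a memory log of its actions;
    # OASIS agents add personas, long-term memory, and 23 social actions.
    ACTIONS = ["follow", "comment", "repost", "like", "mute"]

    def __init__(self, name, rng):
        self.name, self.rng, self.memory = name, rng, []

    def act(self, graph):
        action = self.rng.choice(self.ACTIONS)
        neighbors = graph[self.name]
        target = self.rng.choice(neighbors) if neighbors else None
        self.memory.append((action, target))
        return action, target

def simulate(seed_text, steps=5, seed=0):
    rng = random.Random(seed)              # Step 1: seed document in
    graph = build_graph(seed_text)
    agents = [Agent(name, rng) for name in graph]
    log = [a.act(graph) for _ in range(steps) for a in agents]
    # Step 4 (toy): "prediction" = the dominant emergent behavior
    actions = [action for action, _ in log]
    return max(set(actions), key=actions.count)
```

The point of the sketch is the shape of the loop, not the behavior: the interesting dynamics in MiroFish come from the agents' LLM-driven personalities, which a random-choice stand-in cannot reproduce.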
04-17
I am more curious about Mythos than Opus 4.7.

While the community is buzzing with excitement over the rapid arrival of Opus 4.7, Anthropic quietly unveiled the overwhelming metrics for Mythos alongside it.

The transition from Opus 4.6 to 4.7 was a generally stable improvement. On SWE-bench Pro it rose roughly 11 percentage points, from 53.4% to 64.3%, and on Terminal-Bench about 4 points, from 65.4% to 69.4%. Befitting a generational upgrade, the gains were spread evenly, but with increases ranging from single digits to the low tens across benchmarks, it is best described as "steady progress."

The jump from Opus 4.7 to the Mythos Preview is on a completely different scale. SWE-bench Pro jumped 13.5 percentage points, from 64.3% to 77.8%, and Terminal-Bench rose 12.6 points, from 69.4% to 82.0%. SWE-bench Verified climbed from its previous high of 87.6% to 93.9%. Gains in this high-score range carry significance beyond the raw numbers, because difficulty there rises exponentially. On Humanity's Last Exam "with tools," Mythos also recorded the highest score among all models in the table, rising 10 percentage points from 54.7% to 64.7%. Meanwhile, the Cybersecurity benchmark dipped slightly between 4.6 and 4.7 before Mythos surged 10 points to 83.1%.

That said, Mythos is still in the Preview stage, and with measurements unavailable for some benchmarks such as Scaled Tool Use, Financial Analysis, and Multilingual Q&A, its completeness as a general-purpose model remains to be verified. Looking solely at the measured range, though: if Opus 4.7 was an incremental evolution of 4.6, Mythos looks like the next-generation model we have truly been hoping for.

Holding on Mythos...

#AI #Opus4.7 #Mythos #Anthropic #Claude
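For reference, the percentage-point deltas quoted in the post (using only the figures given there) work out as follows:

```python
# Benchmark scores quoted in the post: (Opus 4.6, Opus 4.7, Mythos Preview)
scores = {
    "SWE-bench Pro":  (53.4, 64.3, 77.8),
    "Terminal-Bench": (65.4, 69.4, 82.0),
}
for name, (v46, v47, mythos) in scores.items():
    d1 = round(v47 - v46, 1)     # generational step, 4.6 -> 4.7
    d2 = round(mythos - v47, 1)  # jump to the Mythos Preview
    print(f"{name}: +{d1} pp then +{d2} pp")
# SWE-bench Pro: +10.9 pp then +13.5 pp
# Terminal-Bench: +4.0 pp then +12.6 pp
```

On both benchmarks the second delta exceeds the first, which is the post's core claim in numerical form.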
04-15
The more we use AI, the clearer it feels that we must seize this opportunity.

I believe the generation that has experienced crypto keenly realizes just how crucial the roles of both individuals and organizations are in the early stages. As we enter the early stage of this technology cycle, small organizations will emerge commanding absurdly high valuations, and we may see people earning high salaries by "leasing out" AI agents (the era of hiring AI agents). Once this FOMO sets in, the value of talent who can use AI effectively will rise even further. A market is coming where, to borrow a friend's assessment, one person earning 200 million won a year may look more undervalued than five people earning 40 million won a year.

"Hyperliquid, founded by Jeffrey Yan, is reported to have generated over $900 million in revenue with just 11 employees, making it one of the most profitable startups in the world in terms of revenue per employee."

As this news suggests, productivity per person will become an ever more critical metric. To survive, you must become a "Non-Fungible Human." Now is the time to grow into irreplaceable talent through outstanding learning ability, creativity, insight, and execution. Are you learning? Are you taking action? Are you managing your physical condition and gaining diverse experiences to sustain your creativity? Are you reading widely and encountering varied perspectives to sharpen your judgment? These are questions worth asking yourself.

For an organization to survive, it too must become an "irreplaceable organization." We must keep learning and sharing together. As production costs fall, we must make countless new attempts and execute on them, and we must provide an environment where that is possible. As the saying goes, "crisis is opportunity." I hope you can develop yourself in this era of job insecurity driven by the impact of the AI industry.
────────────────── Salon de l'IA (AI Salon): We will always strive to deliver helpful information and diverse insights! 🙏 #AI #IrreplaceablePerson #TechnologyCycle
04-08
Interview with Google CEO Sundar Pichai ✍️ Facts and Summary
www.youtube.com/watch?v=bTA8sj...

1. Infrastructure: Massive Capital and Physical Bottlenecks
- TPU-based vertical integration: Google currently operates 7th-generation TPUs and secures overwhelming inference speed and efficiency over competitors through a "full-stack" strategy of self-optimizing everything from chips to models to data centers.
- Real physical bottlenecks: capital is sufficient, but wafer production capacity, power supply, site permits, and even a shortage of skilled electricians act as real constraints on growth.
- Memory supply critical point: global memory supply is predicted to fall short of demand in 2026–2027, and teams building efficient models will widen the gap during this period of supply constraint.

2. AI Agents: Evolution, Not the End of Search
- Pichai emphasized that search will not disappear but will evolve into an "agent manager" that performs complex tasks on behalf of users.
- Asynchronous task execution: shifting from one-off searches that simply answer queries to executing time-consuming tasks in the background, such as planning trips or analyzing data.
- 2027 inflection point: by 2027, the transition to "fully agentic" operations, where agents produce results without human intervention in non-financial enterprise processes (e.g., automated forecasting, data integration), is predicted to begin in earnest.

3. New Businesses Being Bet On: Space, Autonomous Driving, Robotics, Bio, Logistics, Quantum Computing
- Space data centers: a project to build data centers in space was launched with a small team to overcome ground-based power and site constraints. As a major shareholder holding roughly 10% of SpaceX, Google anticipates synergies.
- Robotics: after past failures due to immature AI technology, Google is again achieving world-class performance by combining the Gemini model's spatial reasoning capabilities with collaborations with Boston Dynamics, Agile, and others.
- Bio: moving beyond predicting protein structures with AlphaFold to designing actual new drug candidates and dramatically increasing the probability of clinical success.
- Drone delivery: announcing that 40 million Americans will come within Wing's drone delivery service coverage area.
- Quantum computing: continued investment in problems impossible for classical computers, such as weather forecasting and molecular simulation; recent meaningful results bring it closer to practical application.

💡Opinions
- Hardware bottlenecks and the evolution of search into agents are well-known topics; the new businesses Google is focusing on are the points worth noting.
- I am curious what synergy might emerge with SpaceX. While Elon Musk competes with Google in AI, they might cooperate in the space sector.
- With AlphaFold and its Nobel Prize-winning team, Google is the Big Tech company with the highest potential to hit the jackpot in biotech.
04-06
OpenAI, creator of ChatGPT and the epitome of AI, faces a moment of proof.

OpenAI has discontinued Sora, a project that had secured a $1 billion investment from Disney, just six months after launch. The official reason was a strategic reallocation of computing resources, but the underlying factor is an increasingly fierce competitive landscape. With Google's Veo and ByteDance's Seedance surging in video generation, maintaining Sora, which was burning $1 million a day while holding fewer than 500,000 users, would not have been strategically sound.

ChatGPT still holds the largest user base among conversational Q&A chatbots. In multimodal generative AI (image, video, and music generation), however, Google is rapidly dominating a broader domain with the Gemini 3 series, Veo, and Nano Banana. Anthropic's Claude exerts the greatest influence in the practical realm of code and agents: thanks to Claude Code's overwhelming performance, it has become a de facto standard tool among developers, and according to the WSJ, this was the direct reason behind Sora's discontinuation.

This is where OpenAI's choice gets interesting. They have opted to compete with Anthropic in coding and agents rather than with Google in multimodality. Putting Codex at the forefront and deploying GPT-5.4, they are expanding across the board: desktop apps, CLIs, IDE extensions, and the cloud. The message is that the computing resources previously spent on Sora will be concentrated here.

The landscape of the AI market is shifting from "who can do the most" to "who creates the most tangible value." Since AI ultimately runs on limited resources, it cannot do everything; selection and concentration are necessary.

+ OpenAI's diversification strategy involves using invested assets to acquire and merge with companies capable of building ecosystems. (This situation shows each CEO leaning into his own strength: Sam Altman into the VC-style dealmaking he excels at, and Anthropic's leadership into the engineering domain of model research.)
04-06
Sam Altman Interview (Sora Discontinuation, OpenAI's 3 Major Focus Areas, IPO, Future of AI) ✍️ Facts and Summary
www.youtube.com/watch?v=mJSnn0...

1. Reasons for the Sora Discontinuation
- Strategic reallocation of computing resources: Sora was discontinued to redirect the massive compute consumed by video generation toward next-generation intelligent models expected to create higher value.
- Change in business priorities: despite the major Disney partnership, projects that will transform the foundations of society, such as "AI researchers" that go beyond simple content creation, were judged more urgent.

2. OpenAI's 3 Major Focus Areas
- Automated researcher: building systems capable of compressing 10 years of scientific progress into one year, aimed at humanity's hard problems such as disease treatment and energy revolutions.
- Automated enterprise operations: building an economic ecosystem where AI handles everything from coding to operations, enabling "one-person unicorn companies" run by a single person.
- Personal agent (super assistant): an "invisible assistant" that reduces daily friction by grasping the user's full context and proactively handling tasks such as web browsing and message processing.

3. IPO and Governance
- Hinting at a listing: Altman kept a fluid stance on an IPO within the year ("it could happen, but nothing is confirmed"); the company is already valued in the trillions.
- Emphasis on responsible management: even if the CEO role is technically replaceable, he predicted the world will continue to demand "human leaders" who make socially important decisions and take responsibility.

4. The Future of AI
- A great transformation of cognitive capability: before long, AI inside data centers will perform more intellectual activity than all human brains combined, a massive turning point in human history.
- Abundance and a new social contract: he emphasized that new economic and tax systems, such as "universal basic income" or "citizen equity ownership," are needed to share the resource abundance AI brings.

💡Opinion
- There used to be a rumor that WorldCoin would be integrated into an OpenAI SNS, but that seems to have fallen through.
- WorldCoin is the coin that borrowed Sam Altman's name to pitch basic income and iris recognition. It's going to zero.
04-02
The full story of the Figure TVL crash on DeFiLlama.

0. DeFiLlama clashed with Provenance.

The incident began when Provenance and Figure claimed $12 billion in RWA TVL. (Given that total TVL on DeFiLlama is around $80-120 billion, this is simply a massive amount.) But when DeFiLlama verified the claim against actual on-chain data, the numbers didn't add up. The verified on-chain data showed:
- Assets used for transactions: BTC $5M, ETH $4M
- Proprietary stablecoin YLDS: $20M
- RWA transfers: mostly processed by separate accounts rather than the holders' own wallets.

In other words, this was not a structure where users held assets and traded on-chain; transactions ran off-chain and were merely recorded on the blockchain. This happened around the time Figure was preparing its IPO, and DeFiLlama's 0xngmi accused Figure of inflating the numbers to artificially pump the stock price, posting a very long thread on X. zkSync was criticized alongside it, for roughly similar reasons. From DeFiLlama's perspective, the problems were:
- Users do not move assets directly with their own keys.
- Almost no on-chain liquidity.

DeFiLlama then made adjustments:
- Strengthened due diligence on RWA TVL overall.
- Introduced a policy of removing unproductive or unverifiable assets.

1. The fallout spilled over to Plume.

DeFiLlama changed its TVL criteria starting in late 2025. Previously, any asset held in a contract counted as TVL; now all liquidity without actual economic activity is removed. Specifically, single-depositor vaults, LPs without transactions, lending pools without borrowers, and unverified wrapper assets are all excluded. As DeFiLlama went through this, it ended up cleaning up Plume's data as well.

Upon examination, a significant portion of Plume's initial TVL turned out to be concentrated in protocols like Pell, in vaults driven by a handful of wallets with almost no actual user activity. DeFiLlama classified this as unproductive liquidity and removed it in bulk. As a result, Plume's TVL plummeted from $150M at the time to the $20M range.

To be extremely specific: in the Pell Network esBTC vault, the single wallet 0xD7a3ecd8086100C9cD3E50B33Ba3061a9f3AFFE3 deposited a total of $100M:
explorer.plume.org/tx/0x639ceb... ($10M)
explorer.plume.org/tx/0xa29388... ($20M)
explorer.plume.org/tx/0x989381... ($20M)
explorer.plume.org/tx/0xa4988e... ($10M)
explorer.plume.org/tx/0x120858... ($20M)
explorer.plume.org/tx/0xf52b0e... ($20M)

The same goes for the Pell Network YBTC.B vault: 0xE8ccbb36816e5f2fB69fBe6fbd46d7e370435d84 alone deposited about $34M:
explorer.plume.org/tx/0xa0ffd1... ($6.8M)
explorer.plume.org/tx/0xf4fb7f... ($13.7M)
explorer.plume.org/tx/0xb4a70f... ($13.7M)

In other words, two wallets had put a total of roughly $134M worth of BTC into Pell Network on Plume. For reference, Pell Network's own TVL crashed the same day, plummeting vertically from over $200M to almost zero. Apparently there really weren't many people depositing into Pell Network after all.

So what I said about a month ago turned out half right. I still don't know whether some kind of deal was struck between Plume/Pell and the whales doing the depositing (though I strongly suspect there was); since only two wallets were involved, that half I can never verify, but the other half I was right about. Of course, the vertical drop in Plume's TVL wasn't because the whales suddenly withdrew, but because DeFiLlama changed its TVL calculation method. The numbers roughly match: TVL dropped from $150M to $20M, and $134M was excluded from the calculation.
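A quick sanity check on the figures above, tallying the deposit amounts listed from the explorer links (in $M):

```python
# Pell vault deposits cited above, in $M (from the explorer tx list)
esbtc = [10, 20, 20, 10, 20, 20]   # wallet 0xD7a3...FFE3, esBTC vault
ybtcb = [6.8, 13.7, 13.7]          # wallet 0xE8cc...5d84, YBTC.B vault

print(sum(esbtc))                        # 100 -> the "$100M" figure
print(round(sum(ybtcb), 1))              # 34.2 -> "about $34M"
print(round(sum(esbtc) + sum(ybtcb), 1)) # 134.2 -> the ~$134M excluded
# $150M minus ~$134M leaves roughly the $20M range now shown on DeFiLlama
```

The two wallet totals do account for essentially the entire drop, which supports the reading that the crash was a methodology change rather than a withdrawal.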
So the conclusion is that, one way or another, I was roughly right.

2. Difference in TVL Calculation Methods Between rwa.xyz and DeFiLlama

RWA.xyz measures something completely different from DeFiLlama. DeFiLlama only counts capital actually deployed inside DeFi protocols, whereas RWA.xyz measures the total value of real-world assets existing on the chain. Assets such as tokenized government bonds, funds, and credit are all included even if they only sit in wallets; some figures even combine off-chain data to calculate NAV. Since DeFiLlama is an indicator of DeFi activity and RWA.xyz is an indicator of asset issuance and circulation, a difference of tens of times on the same chain is the normal outcome. Measured by distributed assets, Plume holds 9th place among all chains with $349M in RWA value as of today.

3. The Direction Plume Pursues

Just before or after the Provenance incident (the timeline is uncertain since it happened a while ago), Plume published an article titled "TVL Is Meaningless": Part 1 / Part 2. Plume Network focuses more on metrics such as the number of wallets holding RWA and DAU than on TVL. In particular, since an RWA is meaningful simply by being held in a wallet, TVL concentrated in a single wallet is meaningless; if many users actually hold and use even small amounts, that counts as adoption. However, since wallet counts can be manipulated at will, it would be necessary to dig into this with tools like Arkham, though the foundation is unlikely to actually go that far.

4. Conclusion

There is no definitive answer as to who is right between rwa.xyz and DeFiLlama. Rather, we need to work out which methodology fits which situation. When tokenization was not yet mainstream, you could simply look at TVL with a clear conscience, but times have changed and this problem has emerged. The real issue is that for old-timers like me, checking DeFiLlama's TVL is the first thing we do when encountering an unfamiliar protocol or chain. It has practically become a first-impression test, but it isn't everything. Even old-timers need to update their mindset a bit.

Links
- Full text of 0xngmi's tweet
- Question about why Plume Network's TVL suddenly decreased + 0xngmi's original response
- Plume Network's shattered TVL
- rwa.xyz Plume page
- DeFiLlama TVL methodology

*Please let me know if any information is incorrect. I apologize for posting last time without doing proper research.
*A representative of a Ripple ecosystem project, whom I met while participating in Devrell on Ripple, is looking for a marketing intern: check the job posting and send your documents to sivax@sivax.io.