UNICORN⚡️🦄
129,703 Twitter followers
$BTC ⚡️⚡️⚡️⚡️⚡️⚡️⚡️⚡️ ⚡️⚡️ ⚡️⚡️⚡️⚡️⚡️ #OKX One Web3 entry point is enough https://t.co/pZCFItmdrD #Binance The shared choice of nearly 300 million users https://t.co/KoDvNU04iq
Posts
UNICORN⚡️🦄
Thread
The conversation should be objective. Although I have a good relationship with everyone at OKX, back in 2018 OKX's contract risk control had significant flaws: there were no restrictions on position size or leverage, and the insurance fund was small, so losses relied mainly on socialized clawbacks. BitMEX, by contrast, had already implemented an ADL (auto-deleveraging) mechanism. OKX has since addressed and improved these risk-control deficiencies.

Going back 8 years: after the systemic loss, the platform bore 2,500 BTC of the loss, recovered a portion from profitable users through socialized clawbacks, and froze 517 BTC belonging to the liquidated users. Clause 6.2 of the user agreement allowed the platform to close, restrict, or roll back trades. The terms were biased towards the platform, but the actions stayed within the scope of that textual authorization, so the dispute can only be resolved through legal channels. On Twitter it is hard to tell how much of the noise around this 8-year-old issue is FUD, so let's look at the user behavior itself.

1/ Did market manipulation occur? In 2018, a $450 million long position was opened. Given the market depth at the time, a position of that size inevitably moved prices, which itself exposed a risk-control loophole: such a large position should never have been allowed to trade. Freezing the position after it triggered the alarm was a risk-control measure. At 20x leverage, a move of less than 5% triggers liquidation (a quick sketch of that arithmetic follows this post), so the claim that the position would simply have been reduced automatically had it not been frozen is illogical. The combination of high leverage and an extremely high share of open interest is the typical structure of manipulation risk, and it ultimately led to a blow-up and systemic risk.

2/ Was freezing the 517 BTC reasonable? The announcement shows that a liquidation shortfall occurred. The platform used 2,500 BTC to cover the losses, deducted profits from winning users through socialized clawbacks, and simultaneously froze the 517 BTC; the total shortfall was approximately 4,000 BTC or more. The agreement stipulates that sub-accounts and wallets are treated as a single pool of assets. The clause is controversial, but the operation stayed within the agreement's framework.

Rather than dwelling on public opinion and FUD over an 8-year-old issue, pursuing legal action is the better path. 8 years ago OKX's risk control did have the flaws described above, but after 8 years of gradual improvement, I personally feel very confident using it. OKX twitter.com/UnicornBitcoin/sta...
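A minimal sketch of the leverage arithmetic behind the "less than 5%" claim, ignoring maintenance margin, fees, and funding (which in practice shrink the liquidation distance further); the numbers are illustrative:

```python
# Rough liquidation arithmetic for an isolated position: with no
# maintenance margin or fees, a price move of ~1/leverage against the
# position exhausts the initial margin.

def liquidation_move(leverage: float) -> float:
    """Approximate adverse move that wipes out initial margin."""
    return 1.0 / leverage

for lev in (10, 20, 50):
    print(f"{lev}x leverage -> ~{liquidation_move(lev):.1%} adverse move liquidates")

# 20x -> ~5.0%: once maintenance margin is counted, liquidation hits
# before the full 5%, matching the "less than 5%" claim in the post.
```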
UNICORN⚡️🦄
Thread
AI big-data models have indeed evolved to the point where they can take over many functions; traditional roles such as impartial review, arbitration, and auditing can now be performed by AI. Regarding the recent FUD (fear, uncertainty, and doubt) around Binance deposits and withdrawals failing and accounts being frozen, BIRDEYE used an AI big-data model to systematically break down what was actually happening.

The core idea of BIRDEYE's MAS framework: instead of pre-assuming who is good or bad, it first scores accounts across five dimensions, then layers risk signals on top for interpretation. The five dimensions are account identity authenticity, behavioral density and collaboration, content originality and fingerprint, social relationship depth, and motivation and interest shift. Each dimension is scored from 0 to 2 points, for a total of 0 to 10, and the totals are then segmented into risk levels (a scoring sketch follows this post).

And here's the key point: out of 92 sample accounts, zero fell within the high-confidence interval. The overall average score was 4.54, the median 5.09, and the highest 8.07. In other words, even the most human-like accounts didn't reach 9.

To be clear: the score alone only makes an account suspicious. What's truly damning is the closed-loop evidence chain.

1/ Image fingerprints match directly on pHash. Screenshots from 15 accounts supposedly on different regions and devices show a pHash Hamming distance of 0: the same upstream material was distributed across multiple accounts (a distance check is sketched below).

2/ The forgery shows up in UI characters. Replacing the Latin letter 'o' with the Greek letter 'ο' is almost imperceptible to the naked eye, but extracting the code points reveals it (a homoglyph check is sketched below). This method is closer to phishing and impersonation.

3/ Image material suspected of AI synthesis. Multi-model AI-detection scores range from 0.82 to 0.94, and anything above 0.80 is flagged. Combined with the reported lighting inconsistencies and abnormal facial features, this at least means the material's credibility is weak.

4/ Behavioral synchronization and scripted resonance. High-density similar content is published within the same time window, with a Poisson fit giving p < 1e-6 as the statistical judgment (sketched below). On top of that, the typos and wording repeat, the script structure is highly homogeneous, and the copy reads as mass-generated rather than spontaneous.

5/ At the identity level, a concentrated wave of name changes and asset reuse emerges. The report mentions that in January 2026, approximately 42% of the accounts changed their names or bios, with the changes occurring in clusters. Such synchronicity is extremely rare among organic users.

A single advertising operation is running the whole process: unified materials, unified distribution, unified pacing, unified script. Too obvious, and the work isn't sophisticated. openai.study/html/report_en.ht...… openai.study/html/BIRDEYE_repo...… twitter.com/UnicornBitcoin/sta...
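A minimal sketch of the five-dimension scoring described above, assuming per-dimension scores in [0, 2] summed to a 0-10 total; the risk-band cut-offs are placeholders, since the thread reports the resulting statistics (mean 4.54, median 5.09, max 8.07) but not the actual segmentation thresholds:

```python
# Sketch of the MAS-style scoring: five dimensions, each 0-2, summed
# to a 0-10 total, then bucketed into risk bands. Band thresholds here
# are placeholders, not the report's actual segmentation.

DIMENSIONS = (
    "identity_authenticity",
    "behavioral_density_and_collaboration",
    "content_originality_and_fingerprint",
    "social_relationship_depth",
    "motivation_and_interest_shift",
)

def total_score(scores: dict[str, float]) -> float:
    """Sum the five dimension scores after range-checking each one."""
    for dim in DIMENSIONS:
        if not 0.0 <= scores[dim] <= 2.0:
            raise ValueError(f"{dim} must be within [0, 2]")
    return sum(scores[dim] for dim in DIMENSIONS)

def risk_band(total: float) -> str:
    """Placeholder segmentation of the 0-10 total into risk levels."""
    if total >= 8.0:
        return "likely organic"
    if total >= 5.0:
        return "uncertain"
    return "likely coordinated"

account = {dim: 1.0 for dim in DIMENSIONS}
account["content_originality_and_fingerprint"] = 0.5  # e.g. reused material
total = total_score(account)
print(total, risk_band(total))  # 4.5 likely coordinated
```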
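A sketch of the pHash comparison in point 1/, using the third-party Pillow and imagehash packages; the file names are placeholders:

```python
# Compare screenshots from two accounts by perceptual hash: a Hamming
# distance of 0 means the images come from the same upstream material
# even if metadata and accounts differ.

from PIL import Image          # pip install Pillow
import imagehash               # pip install ImageHash

def phash_distance(path_a: str, path_b: str) -> int:
    """Hamming distance between the 64-bit pHashes of two images."""
    return imagehash.phash(Image.open(path_a)) - imagehash.phash(Image.open(path_b))

# Placeholder file names for two accounts' screenshots.
dist = phash_distance("account_a.png", "account_b.png")
print(dist)  # 0 -> identical source image, as reported across 15 accounts
```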
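A sketch of the homoglyph check in point 2/, standard library only; it flags any letter whose Unicode name is not Latin, which is exactly how the Greek 'ο' swap gets caught:

```python
# Flag letters outside the Latin script: the Greek omicron looks like
# a Latin 'o' on screen, but its Unicode name gives it away.

import unicodedata

def non_latin_letters(text: str) -> list[tuple[str, str]]:
    """Return (char, Unicode name) for every non-Latin letter in text."""
    return [
        (ch, unicodedata.name(ch, "UNKNOWN"))
        for ch in text
        if ch.isalpha() and not unicodedata.name(ch, "").startswith("LATIN")
    ]

print(non_latin_letters("ok"))   # [] -- genuine Latin letters
print(non_latin_letters("οk"))   # [('ο', 'GREEK SMALL LETTER OMICRON')]
```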
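A sketch of the Poisson burst test in point 4/, with scipy as a dependency; the baseline rate and the observed count are invented for illustration, not taken from the report:

```python
# Burst test: under a Poisson null for the baseline posting rate, how
# improbable is the observed cluster of near-identical posts in one
# time window? Numbers below are illustrative assumptions.

from scipy.stats import poisson   # pip install scipy

baseline_rate = 0.5   # assumed: expected similar posts per 10-min window
observed = 15         # assumed: similar posts actually seen in one window

# sf(k - 1, mu) == P(X >= k) under Poisson(mu)
p_value = poisson.sf(observed - 1, baseline_rate)
print(f"p = {p_value:.2e}")  # far below the 1e-6 threshold cited above
```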