AI Hallucinations Are Becoming a Real Risk. Can Mira Build the Verification Layer AI Is Missing?
AI hallucinations used to feel like a harmless quirk. You’d ask a model something, it would produce a polished answer, and occasionally that answer would contain something… invented. A fake citation. A statistic that didn’t exist. A confident explanation that sounded right but wasn’t. At first, it felt almost charming, like catching a clever student bluffing through a question they didn’t fully understand. You’d correct it, shrug, and move on.

But the longer I’ve watched AI systems evolve, the less amusing those moments feel. Because hallucinations aren’t rare edge cases. They’re structural. Large language models generate responses by predicting the most statistically probable continuation of text. They’re not consulting a live database of verified facts every time they answer. They’re estimating. And when the estimate is wrong, the delivery doesn’t change. Same tone. Same fluency. Same calm authority.

That symmetry is what makes hallucinations dangerous. If an AI sounded uncertain when it guessed, most people would treat its answers more carefully. But it doesn’t. It sounds certain. And certainty carries weight.

Right now, that dynamic is mostly manageable because humans remain directly involved in the loop. AI drafts something. A person reviews it. Mistakes get corrected before anything important happens. But that boundary is starting to blur. AI isn’t just assisting anymore. It’s being integrated into systems: trading strategies, compliance workflows, customer service automation, governance analysis. Places where outputs don’t just inform decisions; they influence them. And once outputs begin triggering actions, hallucinations stop being funny. They become risk.

That’s the context in which Mira’s thesis started to make sense to me. Not as another attempt to combine AI and blockchain for narrative appeal, but as a response to a very specific gap: verification. Right now, most AI pipelines assume that the model’s output is reliable enough to pass downstream.
If a response contains an error, the expectation is that a human will eventually catch it. But what happens when that human layer disappears? What happens when autonomous agents start interacting with financial systems, executing transactions, or coordinating complex workflows? At that point, “trust the model” becomes a fragile assumption.

Mira approaches the problem from a different angle. Instead of trying to eliminate hallucinations entirely (which may not be realistic), it treats AI outputs as something that needs to be verified before they can be trusted. When an AI produces an answer, Mira’s system decomposes that answer into smaller claims. Each claim can then be evaluated independently. Those claims are distributed across multiple AI models participating in the network, where each model verifies them from its own perspective. Agreement increases confidence. Disagreement becomes visible. That step alone changes the dynamic: instead of a single model acting as the ultimate authority, the system relies on a process of cross-validation.

Anyone familiar with decentralized systems will recognize that instinct. Crypto solved a similar trust problem years ago. You don’t rely on a single validator to maintain a blockchain. You rely on a network of validators that verify each other through consensus. You don’t assume honesty. You design incentives so that honesty is the rational behavior.

Mira applies that same philosophy to information generated by AI. Verification isn’t just a background process. It’s enforced through incentives. Validators have stake in the network. Verification results can be recorded on-chain, creating a transparent and auditable trail of how confidence in a claim was established. The goal isn’t to create a perfectly accurate AI model. It’s to create a system where unreliable outputs become visible before they propagate into decision-making systems.

Of course, there are still open questions. Verification layers introduce friction.
Running multiple models to check claims requires compute resources. Consensus mechanisms add latency. For some applications, especially those requiring real-time responses, those trade-offs will matter.

There’s also the question of model diversity. Cross-verification only works if the participating models are meaningfully independent. If they’re trained on similar datasets or share the same architectural biases, agreement might simply reinforce the same blind spots. Consensus doesn’t automatically equal truth.

But even with those challenges, the direction feels aligned with where AI systems are heading. As models become more integrated into financial infrastructure, governance processes, and autonomous systems, the cost of silent errors will increase. And historically, systems built on unverified assumptions tend to break once they scale. The internet had to build layers for verifying identity and security. Crypto had to build layers for verifying transactions and consensus. AI may now be reaching the stage where it needs a verification layer for information itself.

Mira’s approach is one attempt at building that layer. Not by promising that AI will stop making mistakes, but by acknowledging that mistakes will happen and designing a system that checks them before they matter.

I’m not fully convinced that the model is perfect. Execution will matter. Network participation will matter. The economics of verification will matter. But the underlying question Mira raises is difficult to ignore: if AI is going to operate inside systems where decisions carry real consequences, can we afford to treat its outputs as self-verifying? Or do we need infrastructure that proves the answer before we trust it?

Hallucinations used to feel like a minor inconvenience. The more AI systems scale, the more they start to look like a risk surface. And risk surfaces eventually demand guardrails.

@Mira - Trust Layer of AI #Mira $MIRA
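Described mechanically, that decompose-and-vote flow reads like a small consensus routine. Here is a minimal sketch in Python, purely illustrative: the function names, the agreement threshold, and the toy verifiers are my own assumptions, not Mira’s actual protocol.

```python
# Hypothetical sketch of claim-level cross-verification (NOT Mira's real code).
# An output is split into claims; each claim is judged independently by several
# verifier models; the fraction of verifiers that agree becomes its confidence.
from typing import Callable

Verifier = Callable[[str], bool]  # returns True if the model accepts the claim

def verify_output(claims: list[str], verifiers: list[Verifier]) -> dict[str, float]:
    """Return a per-claim confidence score: the share of verifiers agreeing."""
    report = {}
    for claim in claims:
        votes = sum(v(claim) for v in verifiers)
        report[claim] = votes / len(verifiers)
    return report

def is_trusted(report: dict[str, float], threshold: float = 0.66) -> bool:
    # The whole output counts as trusted only if every claim clears the bar.
    return all(score >= threshold for score in report.values())

# Toy usage: three "models" with hard-coded opinions stand in for real verifiers.
claims = ["Paris is the capital of France", "The Moon is made of cheese"]
verifiers = [
    lambda c: "cheese" not in c,
    lambda c: "cheese" not in c,
    lambda c: True,  # a verifier with a blind spot
]
report = verify_output(claims, verifiers)
```

The point the sketch makes is the structural one from the post: a single agreeing model proves little, but disagreement across independent verifiers makes the weak claim visible before anything downstream acts on it.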
I’ve been in crypto long enough to know when something is different. Fabric Protocol is different.
I’ve been in crypto long enough to recognize when something feels familiar. Not because the ideas are identical, they rarely are, but because the patterns repeat. A new narrative emerges, excitement builds around it, and suddenly dozens of projects claim to represent the inevitable future of the space. AI quickly became one of those narratives. Within a few months, the ecosystem filled with protocols promising autonomous agents, decentralized intelligence, on-chain reasoning, and machine economies. The language was compelling. It felt inevitable. But the deeper you looked, the more it resembled a familiar structure: powerful models sitting behind interfaces that were technically “connected” to the blockchain, while the real decision-making and execution remained off-chain and mostly unverifiable.
I won’t lie, I almost FOMO’d in at $0.044. I’m glad I waited.
My plan:
🎯 Entry: $0.0415 - $0.0420 🛑 Stop Loss: $0.0398 💸 Take Profit 1: $0.0448 🚀 Take Profit 2: $0.0465
Why this setup? → Volume is huge (470M) → 220% growth in 90 days → It’s just cooling off from the highs → The risk/reward actually makes sense
Using only 5x. Leverage scares me, to be honest.
This isn’t financial advice. I’m just sharing what I’m doing. DYOR.
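For anyone who wants to sanity-check that risk/reward claim, the arithmetic is simple. The levels are from the plan above; the midpoint entry and the code itself are my own illustration, not part of the original call.

```python
# Risk/reward check for the levels above, assuming a midpoint fill of the
# stated entry zone. Ratios are per-token price distances; leverage scales
# both sides equally, so it does not change the R:R.
entry = (0.0415 + 0.0420) / 2          # midpoint entry ≈ 0.04175
stop = 0.0398
targets = {"TP1": 0.0448, "TP2": 0.0465}

risk = entry - stop                    # loss per token if the stop is hit
rr = {name: (tp - entry) / risk for name, tp in targets.items()}

for name, ratio in rr.items():
    print(f"{name}: risk/reward ≈ 1:{ratio:.2f}")
```

With these numbers the first target pays roughly 1.5x the risked distance and the second roughly 2.4x, which is what makes the setup defensible with a tight stop.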
@Fabric Foundation #ROBO $ROBO I didn’t really set out to research the Fabric Foundation. It just kept coming up in conversations about infrastructure projects, so I eventually decided to read a bit more about it. Sometimes the projects that aren’t trying to make the loudest entrance turn out to be the most interesting to explore.
What struck me is the way Fabric talks about coordination. Not just on the technology side, but also how the people involved in a decentralized system stay aligned over time. Things like governance structure, contributor incentives, and long-term participation seem to come up fairly often. Those themes may not generate much excitement at first, but they usually become the most important pieces later on.
Many networks start with strong momentum. Everyone is motivated in the early phase. But after a while, keeping everyone moving in the same direction becomes the real challenge. Fabric seems to be thinking about this from the start rather than waiting for problems to appear.
Another small detail I noticed is the tone around collaboration. It doesn’t feel like the project is trying to compete with every other ecosystem out there. Instead, the direction seems to assume that different networks will eventually interact and support one another in some way. That feels realistic given how interconnected the space is becoming.
Of course, ideas alone prove nothing. Real developer activity and consistent participation will tell the true story over time. Infrastructure requires patience.
For now, the Fabric Foundation simply feels steady. Not rushed, not overly dramatic, just focused on building something that can hold up over the long run.
@Mira - Trust Layer of AI #Mira $MIRA I first heard about Mira during a discussion about AI and crypto, and at the time I didn’t pay much attention. There are already many projects trying to combine those two areas. But after reading a little more about it, the core idea actually stayed in my mind.
The main issue Mira seems to focus on is trust in AI outputs. AI systems can generate answers very quickly, and sometimes very convincingly. The problem is that speed doesn’t always mean accuracy. Anyone who has used AI tools long enough has probably seen responses that sound confident but aren’t fully correct.
That becomes more important when AI is used inside decentralized systems. If applications start relying on AI decisions or information, there needs to be some way to verify that the outputs are reliable. Otherwise mistakes can spread very easily.
From what I understand, Mira is exploring a model where verification doesn’t come from one central authority. Instead, different participants in the network help check whether the information produced by AI models is accurate. In theory that kind of approach could make AI-driven systems more trustworthy.
Of course, ideas like this are easier to describe than to implement. A lot depends on whether developers actually build applications around the system and whether the verification process works smoothly in practice.
Still, the concept itself feels relevant. AI is growing quickly, and blockchain systems are also evolving. Projects like Mira are trying to solve the problems that appear when those two technologies meet.
For now, it’s simply one of the projects I’m keeping an eye on while this space continues to develop.
Analysis: SOL remains in a bearish structure on the 4H timeframe after the rejection from 94 resistance. Price is trading below the short-term moving averages and struggling to hold above the MA(99) around 84. If 83 support breaks, the next downside move could target the 81–78 liquidity zone. 📉 Do you think SOL will bounce from the 84 zone, or are we heading toward 80 next? 🤔📊
Alright, zooming in to the 1-hour frame. That move we saw was ugly, but now we’re just watching how it breathes.
$BEAT
The Signal (Long): I’m not chasing it here at $0.3487. If you buy right now, you’re buying the hype. I want to see a slight pullback first. Look for price to dip and hold around the $0.3400 - $0.3380 zone. If it bounces from that area with a small green candle, that’s the confirmation that buyers are still in control and are just shaking out the weak hands.
The Stop Loss (SL): If you enter in that zone, you need to keep the stop tight. Place it just below the last swing low at $0.3320. If it breaks that, the structure is broken and you’re out with a small scratch. No need to hold a bag.
The Take Profit (TP): This is the tricky part because the coin is volatile. I wouldn’t get greedy.
TP 1 (Scalp): $0.3560. This is the resistance level we spotted earlier. If it taps this and buyers hesitate, take the 4-5% and run. TP 2 (Runner): $0.3740. This is the 24-hour high. Let a runner go here, but move your stop to breakeven once TP1 hits so you’re playing with house money.
The Reality Check: Don’t fall in love with the trade. If it hits the stop, it hits the stop. There will be another setup in an hour. The market doesn’t care about your entry.
$UAI had a powerful rally from the 0.21 region and pushed up to a high near 0.377. After that rapid expansion, the market started losing momentum near the top and the candles began printing lower highs. The latest move shows a clear rejection from the upper zone, and price is now pulling back from the local peak.
This kind of structure usually appears when early buyers start taking profit after a fast move. Unless the market quickly reclaims the upper range, the chart may continue drifting back toward the previous support area.
Trading Bias: SHORT Entry Zone: 0.3300 – 0.3380 Take-Profit 1: 0.3120 Take-Profit 2: 0.2890 Take-Profit 3: 0.2600 Stop-Loss: 0.3570
As long as UAI stays below the recent rejection zone around 0.35, short-term pressure can remain to the downside. If buyers manage to push price above that level, the pullback idea would weaken and the market could attempt another run toward the highs. #AltcoinSeasonTalkTwoYearLow #MarketPullback #USIranWarEscalation
A few years ago, when people talked about infrastructure in crypto, it basically meant one thing.
Throughput. Faster chains. Cheaper transactions. More scalable blockspace. Every new architecture promised to push the same frontier: higher TPS, lower latency, more efficient execution. And for a while, that made sense. Blockchains were struggling under demand. Fees spiked, transactions slowed, and scalability became the central problem everyone wanted to solve.

But over time, something interesting happened. Execution improved. Layer 2s appeared. Rollups matured. Parallelization entered the conversation. Suddenly blockspace wasn’t the only constraint people were thinking about. The bottleneck started to move. Not to computation. To capital.

Because even with faster chains and better protocols, one thing kept repeating across DeFi markets: liquidity appeared quickly… and disappeared just as quickly. Protocols launched. Incentives attracted capital. TVL climbed. Then incentives changed, yields dropped, or volatility increased, and liquidity rotated somewhere else. At first, it looked like normal market behavior. But after watching enough cycles, the pattern becomes harder to ignore. The issue isn’t that DeFi lacks capital. It’s that capital isn’t coordinated very well.

Most liquidity today behaves like a migratory flow. It moves wherever short-term yield looks most attractive. It’s efficient, rational, and extremely mobile. But mobility isn’t the same thing as alignment. Protocols need capital they can rely on. Liquidity providers want capital they can move. Both goals make sense individually. But together, they create a system where liquidity is constantly being rented rather than committed.

That’s where Fabric Foundation started to catch my attention. Not because it’s promising another venue for trading or another yield mechanism, but because it’s exploring a deeper layer of the stack: how capital itself is coordinated across DeFi.
That’s a different kind of infrastructure conversation. Instead of asking how transactions execute faster, Fabric seems to be asking how liquidity can become more intentional. Because right now, DeFi capital formation is mostly reactive. Incentives appear → liquidity arrives. Incentives fade → liquidity leaves. It works, but it’s fragile. During calm markets, the system feels liquid. During stress, you realize much of that liquidity was temporary.

Fabric’s approach, at least as I understand it, revolves around redesigning the structures that connect liquidity providers with protocols. Not through endless emission wars. Not through short-term liquidity mining cycles. But through mechanisms that encourage longer-term participation and stronger alignment between capital and the systems it supports. That’s subtle. And subtle infrastructure rarely trends on timelines. But it matters.

If you zoom out, DeFi has built incredibly sophisticated market primitives. Automated market makers. Lending markets. Derivatives platforms. Structured yield strategies. Yet the capital layer feeding those systems is still largely governed by incentives designed during the earliest phases of DeFi. Liquidity mining was powerful when the ecosystem was small. Today it often creates a kind of musical-chairs dynamic, where capital rotates rapidly across opportunities without building durable depth anywhere.

Fabric seems to be exploring whether DeFi can evolve past that stage. Not by restricting liquidity, but by coordinating it more intelligently. That raises difficult questions. Crypto participants value flexibility. Capital that feels locked or restricted can discourage participation. At the same time, capital that moves too freely can destabilize the very markets it supports. The balance between flexibility and commitment is delicate. And that’s the challenge any capital coordination layer will face.

Another open question is composability.
DeFi works because assets can move freely between protocols. If capital coordination becomes more structured, does that enhance composability by stabilizing liquidity? Or does it introduce new friction? It’s still early to know.

But the idea itself points to something the industry has slowly started recognizing: liquidity size and liquidity reliability are not the same thing. A protocol can show large TVL numbers and still struggle to maintain depth when markets become volatile. Reliable capital, capital that stays during difficult conditions, is far more valuable than temporary inflows chasing incentives.

Fabric’s ambition seems to sit in that gap. Not increasing the total amount of capital in DeFi, but improving how that capital behaves. That’s infrastructure work. And infrastructure tends to look quiet until the moment it becomes essential. Clearing systems. Settlement rails. Oracle networks. All of them seemed abstract early on, until the day markets depended on them.

Whether Fabric ultimately succeeds depends on execution. Capital coordination is one of the hardest design problems in decentralized systems. Markets reward short-term optimization. Protocols need long-term stability. Bridging that gap requires incentive structures that make aligned behavior economically rational, not idealistic.

But if DeFi eventually matures into something closer to global financial infrastructure, the capital layer will probably have to evolve beyond constant liquidity mining cycles. And that’s why Fabric is interesting to watch. Not because it promises the next hype cycle, but because it’s exploring a quieter question: what if the real bottleneck in DeFi isn’t blockspace anymore? What if it’s capital coordination? If that turns out to be true, the next generation of infrastructure might not focus on faster execution. It might focus on smarter liquidity.

@Fabric Foundation #ROBO $ROBO
When I first started reading about @Fabric Foundation it didn’t immediately feel like one of those projects trying to grab attention. It actually felt a bit quieter than most things in this space. The focus seems to be less about headlines and more about how decentralized systems stay organized once the early excitement fades.
A lot of networks launch with strong momentum. Everyone is motivated at the beginning. But after some time, things can drift. Contributors lose alignment, governance gets complicated, and incentives stop matching the original goals. From what I can see, Fabric appears to be thinking about those problems from the start rather than trying to fix them later.
Another thing I noticed is the overall tone of the project. It doesn’t sound like it wants to compete with every other ecosystem. Instead the direction feels more cooperative. The way it’s described makes it seem like different networks will eventually need to interact and support each other instead of existing in completely separate worlds.
Of course, none of this guarantees success. Infrastructure projects always need time before their real value becomes clear. What will matter most is whether developers continue building and whether people actually rely on the system.
For now though, Fabric Foundation comes across as something built with patience in mind. Not rushed, not overly loud, just trying to create a structure that can stay stable over the long run. #ROBO $ROBO
I’ve been around crypto long enough to notice a pattern.
Every cycle, there’s a wave of projects promising to “fix everything.” New chains that will solve scalability forever. Protocols that will eliminate risk. Tokens that somehow turn complexity into certainty. For a while, the narratives sound convincing. Then reality catches up. Markets stress systems. Edge cases appear. Assumptions break. And suddenly the things that looked like minor details become the entire story. Crypto has a way of humbling confident ideas.

That pattern is one of the reasons I’ve been watching the intersection of AI and blockchain with a healthy amount of skepticism. Whenever I hear “AI + crypto,” my first instinct is usually to step back. Most of the time it feels like two powerful narratives duct-taped together. But every once in a while, a project isn’t trying to sell a story. It’s trying to solve a structural problem. That’s the lens through which I started looking at #Mira . Not as another AI project, but as a response to something anyone who uses AI regularly has probably noticed by now: AI is very good at sounding right. Even when it isn’t.

I’ve lost count of how many times I’ve asked a model something fairly specific, received a beautifully structured answer, and then discovered later that part of it was just… wrong. Not malicious. Not intentionally misleading. Just a confident hallucination.

That’s the strange thing about modern AI systems. They don’t express doubt the way humans do. When people aren’t sure, they hedge. “I think.” “Maybe.” “I’m not completely certain.” AI rarely does that. It produces the most statistically likely answer and delivers it with the same calm authority every time. And the better the language models get, the harder it becomes to detect those mistakes. Fluency hides uncertainty.

For casual use cases, that’s manageable. If an AI drafts a tweet incorrectly, you edit it. If it misinterprets a paragraph in a research summary, you double-check the source.
But things start to look different once AI moves into systems where outputs trigger actions. Tools for analyzing finances. Agents that trade automatically. Compliance systems. Governance assistants. In those environments, mistakes don’t just get corrected. They propagate. That’s when the question changes from “How smart is this model?” to “Who verifies what it says?”

And that’s where Mira’s approach becomes interesting. The idea is surprisingly simple. Instead of trusting a single AI model’s answer, the system treats that answer as something that needs to be checked. When an output is generated, it gets broken down into smaller claims. Each claim can then be evaluated independently. Those claims are distributed across multiple AI models in the network, each of which verifies them from its own perspective. Agreement increases confidence. Disagreement surfaces uncertainty.

It’s a process that feels oddly familiar if you’ve spent time in decentralized systems, because crypto solved a similar problem years ago. You don’t trust one validator to maintain a blockchain. You trust a network of validators that cross-check each other through consensus. You don’t assume honesty. You design incentives that make dishonesty expensive.

Mira applies that same philosophy to information produced by AI. Instead of relying on a single model’s authority, it creates a verification process where multiple participants evaluate the output before it becomes something the system treats as reliable. The blockchain element introduces transparency. Verification results can be recorded, audited, and traced. That means the system doesn’t just produce an answer. It produces an answer with a measurable level of verification behind it.

Of course, there are still open questions. Running multiple models to verify information introduces cost. Consensus mechanisms introduce latency. For developers building real-time systems, those trade-offs matter. There’s also the issue of diversity.
If the verifying models share similar training data or architectural biases, consensus could simply reflect shared blind spots. Agreement doesn’t always equal truth.

But even with those limitations, the direction feels aligned with how complex systems tend to evolve. Infrastructure rarely starts with hype. It starts with a problem that becomes impossible to ignore. In this case, the problem isn’t that AI makes mistakes. Humans do too. The problem is that AI mistakes are delivered with confidence and scale quickly once integrated into automated systems. And once that happens, relying on a single model as the ultimate authority becomes fragile.

Crypto learned long ago that single points of failure don’t age well. The strongest networks distribute responsibility. They allow participants to verify each other. They replace trust in individuals with trust in processes. Mira seems to be applying that same logic to AI. Not by claiming models will become perfect, but by assuming they won’t, and building a system that checks their work.

I’ve seen enough cycles to know that the most important pieces of infrastructure often look boring early on. They’re not flashy. They’re not designed to dominate headlines. They quietly solve problems that become obvious only after enough people run into them. AI hallucinations feel like one of those problems. Right now they’re mostly an annoyance. But if AI continues moving deeper into decision-making systems, the need for verification will only grow.

And if crypto has taught us anything, it’s this: systems built on unverified assumptions eventually run into reality. The interesting question is whether verification layers like Mira become part of the default architecture before that happens. I’m not ready to claim they will. But I’ve been around long enough to recognize the pattern. The projects that last usually start by fixing something subtle that everyone else ignored.

@Mira - Trust Layer of AI $MIRA
@Mira - Trust Layer of AI I recently came across a project called Mira while reading about how AI tools might fit into the crypto space. At first I didn’t think much of it, but the idea kept coming back to me because it touches on a real issue people don’t talk about enough.
AI can generate a lot of useful information, but the bigger question is always the same: can we trust it? Models can produce answers quickly, yet verifying whether those answers are correct is often difficult. This becomes even more complicated when AI tools are used inside decentralized applications.
From what I understand, #Mira is trying to approach that problem from a different angle. Instead of depending on one source to confirm whether AI outputs are reliable, the project explores a decentralized way of validating them. In simple terms, multiple participants in the network help evaluate whether an AI result is accurate.
The idea is interesting because it introduces an extra layer of accountability. If Web3 applications start relying more heavily on AI systems in the future, having some form of verification could become very important. Without it, misinformation or unreliable outputs could spread quickly.
At the same time, this is still an early concept and there are many open questions. Technology like this only proves itself once real developers start using it and real users interact with it. Adoption will ultimately decide whether the idea works in practice.
For now, $MIRA is simply one of those projects that is worth watching as the conversation around AI and blockchain continues to evolve.
$SIGN had a very strong breakout earlier that pushed price from the 0.034 region to a peak near 0.0537. After tagging that high, momentum slowed and the candles started showing rejection around the top. The recent pullback suggests some early buyers are locking in profits after the sharp run. Right now price is hovering around the 0.049 area, and the structure looks like a short-term cooling phase after the spike. When a coin rallies this fast, it often revisits nearby support before deciding on the next move.
As long as SIGN stays below the recent high zone around 0.053–0.054, the chart may continue drifting lower toward support levels. If buyers manage to reclaim that high area, the bearish idea would lose strength and the trend could resume upward.
Analysis: XRP is showing weakening momentum on the 4H timeframe after rejection from 1.47 resistance. Price is now trading around the moving averages and forming lower highs, which signals seller pressure. If 1.39 support breaks, the next downside move toward the 1.34–1.30 liquidity zone becomes likely. 📉
Analysis: ETH is showing bearish momentum on the 4H timeframe after a strong rejection from 2199 resistance. Price is forming lower highs and currently trading below the short-term moving average, indicating weakening buyer strength. If 2050 support breaks, it could trigger a move toward the 2000–1940 liquidity zone. 📉
Analysis: ZEC remains in a bearish structure on the 4H timeframe after the rejection from 251 resistance. Price is trading below the short-term moving averages and continues to form lower highs. If 222 support breaks with momentum, the next downside move toward the 215–205 liquidity zone becomes likely. 📉