Security vs. Trustlessness: How to Choose the Mechanism Design of a ZK Protocol


Recently, zkSync and Polygon have each launched their own zkEVM, setting off a wave of enthusiasm in the industry. At the same time, the community has been discussing the security and decentralization of zkEVMs at length. At the recently concluded ETHDenver, IOSG hosted a security-themed event (Stay SAFU, Security Day) and was fortunate to have leading zero-knowledge proof projects join the discussion. They shared the security principles and novel solutions of zero-knowledge proof protocols in mechanism design and engineering, as well as the various trade-offs made along the way.

The following are the insights shared by the guests. The participants are:

Queenie Wu, Partner, IOSG Ventures (Moderator);

Alex Gluchowski, co-founder of zkSync;

Ye Zhang, co-founder and head of research at Scroll;

Matt Finestone, Chief Operating Officer, Taiko;

Mikhail Komarov, founder of =nil; Foundation;

Brian Retford, founder of RISC Zero;

Q1: How does zero-knowledge proof enhance the security of the system you are building? On the other hand, what security issues will the deployment of zero-knowledge proof bring?

[Brian R]

I'm Brian Retford, CEO of RISC Zero. We are the developers of the RISC Zero zkVM, built on the RISC-V microarchitecture, which can execute arbitrary code inside a ZK system. We are also deploying a Layer 2 network called Bonsai, which can execute any code in ZK scenarios — you can think of it as a ZK accelerator. As for how ZK enhances security, I think it depends on the specific application scenario. That said, being able to perform a computation once and generate a proof that can be verified anywhere in the world completely changes the paradigm of blockchain security. You no longer need to redo the same computation over and over, or rely on complex mechanisms (such as economic incentives) to secure the entire system.

[Mikhail Komarov]

I am Mikhail Komarov, the founder of =nil; Foundation. We provide infrastructure for ZK projects under development, such as a zkEVM compiler. This compiler compiles high-level languages into circuits, so that any computation defined in a high-level language can be proved without reimplementing it by hand — the generated circuits are all you need. In addition, we have introduced the concept of a "Proof Market", which provides a decentralized bidding market for projects that want zkSNARK/zkSTARK proofs generated. A developer can post an order for the zero-knowledge proof they need, and then simply consume the resulting proof in their application (for example, a zkRollup can use the proof market).
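The bidding flow described above can be sketched as a toy order book. This is a minimal illustration only — the names (`ProofMarket`, `post_request`, `bid`, `match`) are hypothetical and not =nil;'s actual API: requesters post proof requests with a price cap, provers bid, and the cheapest eligible bid wins.

```python
from dataclasses import dataclass, field

@dataclass
class ProofRequest:
    request_id: int   # identifier chosen by the application (e.g. a zkRollup)
    statement: str    # description of the computation to be proved
    max_price: int    # highest fee the requester is willing to pay
    bids: dict = field(default_factory=dict)  # prover -> asking price

class ProofMarket:
    """Toy order book: requesters post proof requests, provers bid, cheapest bid wins."""
    def __init__(self):
        self.requests = {}

    def post_request(self, request_id, statement, max_price):
        self.requests[request_id] = ProofRequest(request_id, statement, max_price)

    def bid(self, request_id, prover, price):
        req = self.requests[request_id]
        if price <= req.max_price:   # reject bids above the requester's budget
            req.bids[prover] = price

    def match(self, request_id):
        """Assign the request to the cheapest bidder; returns (prover, price) or None."""
        req = self.requests[request_id]
        if not req.bids:
            return None
        prover = min(req.bids, key=req.bids.get)
        return prover, req.bids[prover]

market = ProofMarket()
market.post_request(1, "zkRollup batch #42 state transition", max_price=100)
market.bid(1, "prover_a", 80)
market.bid(1, "prover_b", 60)
market.bid(1, "prover_c", 150)   # over budget, ignored
print(market.match(1))           # -> ('prover_b', 60)
```

A real proof market would additionally verify the delivered proof on-chain and escrow the fee; this sketch only shows the matching logic.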

Basically, we are infrastructure that developers need. It doesn't enhance security on its own, but it enhances security as a whole. As Brian said, it does so by removing trust assumptions from protocols that should run in a trustless environment. The key is to keep reducing those trust assumptions — that is how security improves. I believe some of the security incidents that happened last year might have been avoided if the projects involved had used zero-knowledge proofs.

[Matt Finestone]

I'm Matt from Taiko, an Ethereum-equivalent ZK Rollup. We pursue maximum Ethereum and EVM compatibility. What makes us unique in terms of security is that we rely heavily on well-tested, proven Ethereum building blocks, clients, and smart contract patterns. As Mikhail said, ZK reduces trust assumptions, or moves them to the protocol/proof level. "What needs to be trusted" is no longer a handful of motivated people, but mathematics, along with the protocols and applications built around mathematical proofs. A ZK Rollup involves many security considerations beyond ZK itself.

I think we keep the system secure by reusing as much of Ethereum's proven components as possible. Over time, as it is battle-tested, ZK will become a very powerful system.

[Ye Zhang]

Hi, my name is Ye, and I work at Scroll. Let me first give a brief introduction. Scroll is a scaling solution for Ethereum that is highly compatible with it: users can interact with applications as usual, and developers can deploy smart contracts simply by copying and pasting their code and migrating it to Scroll. It is faster and cheaper than Ethereum, with higher throughput and strong security. We will decentralize our proving system (a Decentralized Prover Network) to prevent a single point of failure in the prover. This is our first step towards decentralization, because it will take quite some time for ZK Rollups to become fully decentralized. Even if you have great trust in mathematics and cryptography, you can still hit this single point of failure, because you have to rely on a prover.

Our first step towards decentralization is to decentralize the proving system to make it more reliable. As for the security of ZK, as the other guests have said, ZK gives you very strong public verifiability. Basically, anyone can run the computation and produce a proof, and then anyone else can verify that proof and get the same guarantee. If you have millions of nodes, each one only needs to re-run the cheap verification algorithm rather than the original computation, which is also what makes the system scalable. That is the power of ZK technology. As for potential problems: if the system relies entirely on the mathematics being right, then mistakes — such as missing constraints — can be dangerous. This is why we take multiple approaches to improve security. For example, we develop in a community-driven way: from day one we have open-sourced the entire development process, and our code is reviewed both by the Ethereum community and by our own community, which holds us to a higher standard. This is how we minimize trust and improve security.
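The asymmetry Ye describes — one party runs the heavy computation, everyone else runs a cheap check — can be illustrated with a toy sketch. Note this is not a real SNARK: the hash-based "proof" below has no soundness or zero-knowledge properties whatsoever; it only illustrates the cost model (expensive proving once, cheap verification everywhere).

```python
import hashlib

def execute_block(n_txs):
    # Stand-in for re-executing a full block of transactions (expensive).
    acc = 0
    for i in range(n_txs):
        acc = (acc + i * i) % 1_000_003
    return acc

def prove(n_txs):
    # Stand-in "proof": a real SNARK prover outputs a succinct cryptographic
    # object; this hash commitment has NO soundness -- illustration only.
    result = execute_block(n_txs)
    proof = hashlib.sha256(f"{n_txs}:{result}".encode()).hexdigest()
    return result, proof

def verify(n_txs, result, proof):
    # Verification is a cheap, nearly fixed-cost check that does not re-run
    # the computation -- the key property real proof systems provide.
    return proof == hashlib.sha256(f"{n_txs}:{result}".encode()).hexdigest()

# One prover does the heavy work once...
result, proof = prove(200_000)
# ...and a million nodes could each run only the cheap check.
print(verify(200_000, result, proof))   # -> True
```

In a real ZK Rollup the verifier is an on-chain contract whose cost is independent of the batch size, which is what makes the per-transaction cost fall as batches grow.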

[Alex Gluchowski]

My name is Alex Gluchowski, I'm the CEO of Matter Labs, the company behind the zkSync protocol. We're building the zkSync Era network, a ZK Rollup with its own zkEVM. We take a slightly different approach from EVM-equivalent Rollups. We believe in pragmatism: start with something compatible, so that developers can easily plug in, port existing applications, and launch with existing tools. However, the ultimate ZK environment looks different — if you tie yourself to legacy technology, it's hard to reach the maximum capacity of a ZK system. This matters because our mission is to scale blockchain to real-world scale, bring the next billion users on-chain, and create a new Internet of Value. If you think about millions or even billions of users, you really want to drive costs down, because across billions of transactions those costs become very significant.

How does this affect how we enhance security? That's a very interesting question. When you ask how to improve security, or any other property, you want to compare against alternatives, right? What is my baseline? What are the alternatives to using ZK? They might be the scaling technologies that existed before ZK Rollups — Optimistic Rollups, sidechains, Plasma, and so on — all of which introduce new trust assumptions. If our goal is to scale to a billion users, and our mission is not just to scale throughput but to scale value while preserving self-sovereignty, self-custody, permissionlessness, and fully trustless operation, then that can only be achieved with ZK.

Q2: When we compare different types of zkEVM, we usually focus on scalability and compatibility (Vitalik made a detailed comparison: https://vitalik.ca/general/2022/08/04/zkevm.html). If we add a security dimension, how do zkSync Era, Scroll, and Taiko weigh the different potential security risks arising from their different mechanism designs?

[Alex Gluchowski]

As the previous speaker mentioned, to be secure you must implicitly trust many components of these complex systems — for example, you trust the code produced by the compiler, assuming it faithfully executes the logic you wrote. But why would you trust it? Solidity is not formally defined, so you just trust that each version of the compiler behaves correctly. We think this is a problem that must be solved. That's why we started building a compiler based on the LLVM framework, which supports Solidity as one of its front ends and relies on that mature framework. It comes with many tools for static analysis, security checks, and so on, and its back end targets our zkVM. We can also support other, more mature languages that have already been used in security-critical environments, such as Rust, or newer languages designed with security in mind, such as Move, which rules out certain classes of bugs like double spending. In short, although it is more complicated, the problem must be attacked at several levels.

[Ye Zhang]

I want to describe a different approach and the background behind it. We are building compatibility at the EVM bytecode level: basically, we are compatible with EVM bytecode directly. This is different from zkSync's approach, even though we also believe compilers should not be blindly trusted. That's why we stick with the Solidity compiler: compilers in general may be immature, but in the blockchain context the Solidity compiler is relatively mature, whereas nobody has combined Solidity with LLVM before. We believe bytecode compatibility is the better standard because it has been tested in practice — a huge body of DeFi smart contracts written by Solidity developers has been battle-tested. That's why we believe adhering to the Solidity compiler standard and to the EVM Yellow Paper definition is the best way to ensure the security of the system. From the circuit side, we don't need to worry about the compiler at all: we don't build our own compiler, we take the existing infrastructure and prove that it executes correctly.

We would rather concentrate the system's complexity on achieving zkEVM compatibility at the bytecode level than build both a compiler and an LLVM back end at the same time. We don't want to build a compiler on top of building a zkEVM. Another consideration is developer experience, which we definitely care about. Layer 2 exists to extend the EVM, and the current EVM is already crowded with a large amount of Solidity code and applications. We want developers to migrate to our system seamlessly while security is preserved. This is why we currently do not plan to add fancier features to the EVM.

Following this standard makes Ethereum truly scalable while ensuring good performance and timely delivery of the systems built on top of it. At the same time, we are promoting various open-source implementations within Ethereum, including Type 1 and Type 2 zkEVMs, covering both privacy and scalability. We have been building in the open since day one. We care deeply about the development and evolution of Ethereum's own zkEVM — we lead half of that development and are part of the team, so we know exactly how long it will take for the whole system to be truly ready. That's why we take this approach: prepare the product, go deep into the community, and then think about how to advance Ethereum's ultimate goals.

[Matt Finestone]

Those were two good answers. Taiko and Scroll are closer to each other: we are not introducing a new compiler either. I like what Alex said — what is the safe alternative in the context of blockchain? I think we would all agree that Ethereum is probably the gold standard. We implement it according to the Yellow Paper and reuse Ethereum rather than adjusting its components; even Ethereum components outside the EVM, such as its data storage structures, are proven in practice.

Of course, there are always trade-offs. Alex talked about a billion users, ultra-low costs, and scaling value. We may sacrifice more on proving costs, but we stick to the battle-tested EVM and Ethereum standards. And we also make trade-offs on some of the considerations Ye Zhang mentioned around practicality and fast time to market.

In the context of ZK, some components are not easy to prove efficiently, such as certain hash functions or data storage structures. We don't change these things, because we are not sure how the replacements would behave — for example, swapping the Merkle Patricia Tree for a Verkle Tree, even though that is on Ethereum's own roadmap. We are more confident in components that have been battle-tested. The complexity of our system lies not in reinventing Ethereum's EVM and other components, but in making ZK fully compatible with the EVM. This takes longer to complete — longer than Scroll, which makes some trade-offs for usability. In terms of security, our implementation path should be more reassuring.

[Mikhail Komarov]

Ethereum is battle-tested, so let's reuse all of these systems and make fewer new assumptions. But there are several security issues few people have really thought about, and our goal is to solve them. The first is that you have to trust the compiler. Another is that if you want full EVM compatibility — Type 1 compatibility — you need to manually reimplement every EVM opcode as a circuit, working out how each expression is expressed as a circuit over a particular field. This is a manual process, very complex and easy to get wrong. We have done it ourselves and messed up the circuit, so we know how bad it is.

To avoid repeating these problems, and to spare anyone else these mistakes, we are working on removing this security assumption by letting people compile circuits from an already battle-tested EVM implementation, rather than reimplementing all the opcodes by hand. Our goal is minimal security assumptions: compile it with the LLVM compiler instead of reimplementing it manually. This is another security assumption that needs to be removed, and we will address it for zkEVM.

[Brian R]

You can run geth on a system like RISC-V to solve the problem Mikhail is talking about — we actually just added support for Go. We built and designed the RISC Zero VM, and part of the reason we chose the RISC-V instruction set is that it has a formal definition and is very lightweight. The security scope of the RISC-V circuit is well specified, and a lot of work has gone into formal verification methods for proving that an implementation conforms to the RISC-V specification. We focus on getting the cryptography of this simple system right, and then running the EVM on top of it actually works. Of course, there is a performance cost to this approach — proving an ERC-20 token transfer takes about a minute, for example.

Q3: As Alex just mentioned, any part of the system could be upgraded or replaced with another solution. So how do you ensure the system's upgradability, and do it in a very safe way?

[Brian R]

Yeah, I think upgradability is a very important topic in ZK. From our perspective, without a deployed network and a lot of economic value at stake yet, we spend a lot of time making sure we are building the right abstractions in the technology stack. We can switch hash functions, switch finite fields and proof systems, or add new techniques such as PLONK to the stack. This is another reason we chose RISC-V as the primary supported instruction set: it is a very clean abstraction in its own right, so you can swap almost anything out. LLVM obviously has very similar characteristics.

[Matt Finestone]

Yes, upgradability is a big topic, and we can think of it as a question of trusting the system. The implementation may have vulnerabilities that put users at risk, or users have to trust the people who built the system, or participants who might exploit their position, and so on. Upgradability is, at some level, a balance between security and trustlessness. As trust in the system grows, you can remove some of these trusted participants — and we should be very cautious about them, because the whole point is to eliminate them eventually. For systems this complex, it is best to plan for this early. I think Alex and the Matter Labs team have provided good reference cases here, with a solid Security Council and time-delay mechanism.

So what is the right pace for upgrading? That is a very important question. I don't know whether more users would feel comfortable with a fully trustless system — such systems are often very complex and introduce a lot of new machinery — or with trusting well-intentioned participants. It is a very human issue. Of course there are technical approaches too; multi-proof may be a good choice. We may be able to reuse some designs from Optimism-like components: if something goes wrong with our validity proofs, reusing an Optimistic Rollup implementation would make it easier to build a fraud-proof system suited to an Ethereum-equivalent environment. You can mix and match fraud proofs and validity proofs, and if they disagree, upgradability or some form of governance can cover it.

[Mikhail Komarov]

Let me put it this way — I've just spent some time thinking about this, and I'm afraid I may be misunderstanding the question, because I want to ask: where is the upgrade problem? Just rebuild the circuit. So what are the upgrade issues?

[Ye Zhang]

From our perspective, first of all, you certainly can't just compile new circuits, because that changes your proving keys, verification keys, and many on-chain smart contracts — so you certainly can't do it very often. We're thinking about multi-proof approaches, adding mechanisms such as double verification. There are multiple ways to solve this problem, though unlike what Matt mentioned, we are not considering combining directly with optimistic fraud proofs, because that would lengthen the time to finality. We are exploring other methods and will soon publish some proposals on Ethereum research about how to add extra guarantees.

For example, Justin Drake proposed using Intel SGX (a TEE) to provide additional, strictly stronger security guarantees. There may also be other governance methods; we think security councils and time delays are good ones, and we are thinking about this issue too. It is a trade-off, and I believe most Rollups will need a long time to truly escape this upgradability problem, because upgrading the system is a long-term matter. We are following and studying this issue closely.

[Alex Gluchowski]

I can give some background on why upgradability is an important issue. For any program running on your desktop, you just download the new version and install it, right? So what's the problem with upgrading? The problem is that in the blockchain context we are trying to build trustless systems, and in some cases the need to upgrade can undermine that trustlessness. For Layer 1 there is no such problem: if you want to upgrade Ethereum, you just download the new client, install it, and everyone coordinates a fork.

We schedule a fork, set a date, fork at a certain block number, and anyone who doesn't like the upgrade can stay on the old branch. This upgrade path is completely trustless — it doesn't make you rely on any honest majority or trusted participants.

The problem arises at Layer 2. A Rollup relies on a smart contract on Layer 1. That contract could be immutable, with fixed functions and verification keys for specific circuits baked in — but then, if there is a bug, you are helpless. So what do you do when you face a bug, or simply want to fix something?

zkSync 1.0 (zkSync Lite) launched with a time lock for upgradability: the team could propose an upgrade to a new version, and if users didn't like it, they all had a few weeks to withdraw their assets back to Layer 1. That gave us a trustless exit mechanism. But the time lock also meant we couldn't ship an urgent fix, so we came up with a compromise and introduced what we call the Security Council — an independent committee of 15 well-known members of the Ethereum community, drawn from different communities and projects.

The team does not control the contract; it can only propose an upgrade plan. The Security Council makes the decision and can choose to accelerate the upgrade. But this is still not ideal, because in theory there is still a group of people who could install a malicious version during that window. They may not want to, but they could be coerced by some actor, and we cannot rule that out. If we want to make full use of zero-knowledge proofs — relying only on mathematics and open-source code rather than on any trusted parties or verification procedures — we can eventually reach a completely trustless mechanism.
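The mechanism Alex describes — a team that can only propose, a time lock that gives users an exit window, and a Security Council that can accelerate — can be sketched as a toy state machine. All names here (`RollupContract`, `UPGRADE_DELAY`, and so on) are hypothetical, and real deployments implement this in Layer 1 smart contracts, not Python.

```python
UPGRADE_DELAY = 14 * 24 * 3600   # illustrative two-week time lock, in seconds

class RollupContract:
    """Toy model of a time-locked upgrade with a Security Council fast path."""
    def __init__(self, council, threshold):
        self.version = "v1"
        self.pending = None          # (new_version, executable_after_timestamp)
        self.council = set(council)
        self.threshold = threshold   # council approvals needed to accelerate
        self.approvals = set()

    def propose_upgrade(self, new_version, now):
        # The team can only *propose*; the time lock gives users an exit window.
        self.pending = (new_version, now + UPGRADE_DELAY)
        self.approvals = set()

    def council_approve(self, member):
        if member in self.council and self.pending:
            self.approvals.add(member)

    def execute_upgrade(self, now):
        if not self.pending:
            return False
        new_version, ready_at = self.pending
        accelerated = len(self.approvals) >= self.threshold
        if now >= ready_at or accelerated:
            self.version = new_version
            self.pending = None
            return True
        return False

contract = RollupContract(council=["alice", "bob", "carol"], threshold=2)
contract.propose_upgrade("v2", now=0)
print(contract.execute_upgrade(now=3600))   # -> False (time lock still active)
contract.council_approve("alice")
contract.council_approve("bob")
print(contract.execute_upgrade(now=3600))   # -> True (council accelerated)
```

The residual trust Alex points out is visible in the sketch: a quorum of council members can bypass the delay, so users must still trust that group not to accelerate a malicious version.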

We are currently considering a better solution: the team proposes a time-locked upgrade, the Security Council can step in to propose freezing the smart contract, and then Layer 1 soft-forks. This requires coordination with Layer 1, which in turn requires the Layer 2 protocol to have enough scale and importance for the community to actually fork and install new versions. Layer 1 cannot do this for every small protocol; it has to be an important, almost system-level protocol on Ethereum.

This is the best mechanism we have right now for trustless upgradability that protects us from serious vulnerabilities. But it still introduces issues such as timeliness: if such an incident occurs, the protocol is suspended for a period of time. Imagine we have moved payments from Visa and PayPal onto these large Rollups, and suddenly user funds are frozen and no one can pay while we spend a few days coordinating an upgrade. That is a huge problem, and we don't currently see a better solution. If you have an idea, please contact us and let's discuss it.

Q4: One keyword has come up many times: "trustless". As we know, the most important components of current systems are still centralized. What security challenges will we face in the evolution from centralization to decentralization?

[Alex Gluchowski]

I think it will enhance security — it gives us an extra layer of protection. First, a ZK Rollup must provide a validity proof for each block, but that can go wrong; for example, maybe we forgot some constraints. On top of that, we also require signatures through a proof-of-stake consensus mechanism, which is an extra layer of protection. To break the system, a malicious attacker must first find a vulnerability and then collude with a majority of these validators.

This is unlikely: the attacker would either already control the chain or would have to buy a large amount of tokens, which gives others enough time to find the same vulnerability and submit it to Immunefi or elsewhere so the team can fix it. We may also run honeypots in parallel — completely open systems that anyone can try to crack for a reward. Overall, this gives the system two independent factors of protection, and we can add more on top.
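The value of two independent protection factors is easy to quantify: if the failure events are genuinely independent, their probabilities multiply. The numbers below are purely illustrative, not estimates of any real system.

```python
# Purely illustrative probabilities -- NOT estimates for any real system.
p_circuit_bug   = 1e-3   # chance the validity-proof circuit has an exploitable bug
p_pos_collusion = 1e-4   # chance a malicious majority of PoS validators colludes

# To break the system, an attacker must defeat BOTH independent layers,
# so (assuming independence) the failure probabilities multiply.
p_system_failure = p_circuit_bug * p_pos_collusion

# The combined system is strictly safer than either layer alone.
assert p_system_failure < p_circuit_bug
assert p_system_failure < p_pos_collusion
```

The independence assumption is doing all the work here: correlated failures (a flaw that defeats both layers at once) would void the multiplication, which is exactly why Alex stresses that the safety factors must be completely independent.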

So far, I would not trust a ZK Rollup that claims to be completely trustless to be safe. To me that would be extremely risky — I would not put more assets on such a ZK Rollup than I can afford to lose.

My favorite example is the Boeing 737 MAX incidents. The cause was not the software problem that public attention was diverted to, but the reliance on a single sensor on the aircraft, which was completely irresponsible. The aviation industry has a long history and many technological iterations behind it, and it is an industry consensus that you cannot rely on a single system. But because safe system design was sacrificed for various reasons (cost, delivery time, and so on) during the 737 MAX's production, the accidents eventually happened. That is why we always want at least two completely independent safety factors to reduce the probability of failure.

[Ye Zhang]

We think about the decentralization roadmap of a ZK Rollup with a long-term philosophy. We have our own views on whether to decentralize the Sequencer or the Prover first, and even on how to define decentralization for a ZK Rollup. In the end, I think both the Sequencer and the Prover will be decentralized, but our priorities differ slightly: we want to decentralize the Prover first. Security is definitely one of the important reasons. If the Sequencer were decentralized first, then before the zkEVM becomes mature and robust, someone who found a vulnerability and submitted a false proof would have a certain probability of getting it accepted and included in a block by the Sequencer, damaging the system.

Therefore, we will keep the Sequencer centralized at first, because the zkEVM is a very complex system and is likely to have vulnerabilities. At least in the early stages, we want to control sequencing centrally and guarantee correct and valid block production.

Another reason to decentralize the Prover first is that many hardware companies are looking for ways to make zkEVMs more efficient. If we commit to decentralizing the Prover, they will get involved in optimizing code for our system. We all know ZK ASICs may take more than a year to arrive; if we decentralize the Prover first, these teams will be more motivated to build for our system and make it more efficient. Decentralizing the Sequencer is something we plan to do later.

There is a more complex factor to consider here: if the Prover and Sequencer roles are assigned to two different groups, the incentive scheme must be designed very carefully — for example, so that the split of rewards between the two parties is reasonable and balances their incentives.

Beyond this, we have other security measures. We build in the open, and we do internal security audits as well as external ones. We have a very strong security team, and we provide various grants to encourage more people to work on security solutions such as formal verification tools. Our team has also found vulnerabilities in the circuits of the Consensys zkEVM and Aztec. We are trying to improve the security of the whole ecosystem.

[Matt Finestone]

Taiko may face this challenge earlier. While every project pursues some degree of decentralization, we are planning to stay as close to Ethereum as possible — the EVM, the gas schedule, the state tree, and so on, as well as its ethos — by decentralizing what we call the Proposer (the Sequencer) and the Prover. In our first testnet a few months ago, about 2,000 independent individuals or addresses proposed blocks permissionlessly. Some blocks may have been malicious, but this reflects a commitment to decentralization. I would describe it not as gradual decentralization but as gradual efficiency improvement, because you have to give up some efficiency: Proposers may build the same block, causing transaction redundancy, while also paying ETH to Layer 1 for valuable block space. Some of them get refunds, and some are simply skipped.

It is not realistic for us to achieve full decentralization immediately in the upcoming testnet. Permissionless proving is harder in a testnet environment because of Sybil attacks: people can fill proposed blocks with spam, and the Prover has to spend real computing resources proving them while earning no real income.

Therefore, we will use permissioned Proposers while allowing any decentralized Prover to submit proofs for blocks and receive the corresponding rewards, which is very important. In addition, if the system fails — a Prover submits a validity proof and a conflicting proof is also submitted — the smart contract can detect it and pause. It recognizes that there should never be two "correct" validity proofs for different versions of the same block. When this happens, it suspends immediately, which causes a time delay. As Alex said, we cannot currently feel fully assured about a completely permissionless and trustless implementation, and we have to work to strike a balance.
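The pause-on-conflict behavior Matt describes can be sketched as follows. This is a toy model, not Taiko's actual contract: names like `ProofRegistry` are hypothetical, and proof validity is represented by a boolean flag rather than real cryptographic verification.

```python
class ProofRegistry:
    """Toy contract logic: pause if two valid proofs assert different states
    for the same block -- that would mean the proof system itself is broken."""
    def __init__(self):
        self.accepted = {}    # block_number -> state_root
        self.paused = False

    def submit_proof(self, block_number, state_root, proof_is_valid):
        if self.paused or not proof_is_valid:
            return "rejected"
        prior = self.accepted.get(block_number)
        if prior is not None and prior != state_root:
            # Two "valid" proofs disagree: halt everything until humans intervene.
            self.paused = True
            return "paused"
        self.accepted[block_number] = state_root
        return "accepted"

registry = ProofRegistry()
print(registry.submit_proof(7, "0xaaa", proof_is_valid=True))   # -> accepted
print(registry.submit_proof(7, "0xbbb", proof_is_valid=True))   # -> paused
```

The pause is deliberately conservative: as Matt notes, it trades liveness (a time delay while the system is suspended) for safety when the proof system's soundness is in doubt.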

[Mikhail Komarov]

We considered this problem from the beginning. Some people's initial solution is top-down: decide to create a Rollup, then work out the order of decentralization — Sequencer first, then Prover, one link after another. We took the opposite, bottom-up approach.

We first built a decentralized Prover network to pool computing power permissionlessly. Then we tried to add a Sequencer on top of it, because the Sequencer must be tightly integrated with the Prover network, especially a mature, decentralized one. This involves issues such as paying additional proof fees and communication complexity, so tight integration is required for the Sequencer to be effective. The system we developed can serve as the underlying infrastructure for ZK Rollups.

To ensure that all proof generation has an incentive mechanism to speed up completion, improve quality and maintain security, we introduced a proof market to manage the generation and ordering of all proofs. At the same time, we keep the system decentralized and permissionless. This approach solves the problem from the bottom up, rather than from the top down.

[Brian R]

I think our approach is very different from other networks. It is similar to the proof market the =nil; team is building, but we take a more trustless approach. We are not currently focusing on the sequencing problem but on the proof system itself, making it more robust for a wide range of computations. This simplifies a lot of complexity and helps bring as much computing power to market as quickly as possible.

We want to lower the barrier to entry for developers, allowing them to build any application they want on Ethereum or any system, and have this decentralized base computing layer with zero-knowledge proofs to guarantee the correctness of the calculations.


Q5 (Audience): In Algorand, there is a technology called State Proofs. The basic idea is to extract the state from one consensus blockchain and attest to it on another — more like a cross-chain solution, and it also uses zero-knowledge proofs. In Layer 2, the system's consensus actually depends on the consensus of Layer 1. Does this reduce the security of Layer 2?

[Alex Gluchowski]

In a ZK Rollup, the flow of assets between Layer 1 and Layer 2 is completely trustless, and Layer 2 fully inherits the security of Layer 1. As for transfers between Layer 2s: if you use the native bridge through Ethereum Layer 1, that is also completely trustless; if the transfer does not pass through Layer 1, its security depends on which cross-chain method the bridge uses.

In zkSync, we are implementing something called Hyperchains. Specifically, we will run multiple chains powered by the same circuit, all still bridged on Ethereum. Hyperchains will provide completely trustless and very cheap transactions from any chain to any other chain. This matters enormously when we talk about bringing hundreds of millions or even billions of users to blockchain.

In the future, we will not be able to run trillions of transactions on a single system or consensus. They will have to run on many different consensus systems — sharding, independent application chains, and so on. At the same time, we need these different chains to be connectable, with low-cost communication between them.

For example, just as we use email across different providers today and users can communicate between systems effortlessly — this is what we hope to achieve with Hyperchains. Besides fully inheriting Layer 1 security and enabling efficient, trustless cross-chain communication, Hyperchains can also achieve ultra-low usage costs through recursive proofs.
