automation/README.md (+2 -2)

@@ -264,11 +264,11 @@ There are several public Grafana dashboards available here:
 The purpose of a public testnet is to allow end-users to try out the software and learn how to operate it. Thus, we accept sign-ups for stake to be allocated in the genesis, and commit those keys to the compiled genesis ledger.
-For context, these keys correspond to the "Fish Keys" in the QA Net deployments, and Online Fish Keys are ommitted in a Public Testnet deployment and "Offline Fish Keys" are instead delegated to the submitted User Keys.
+For context, these keys correspond to the "Fish Keys" in the QA Net deployments, and Online Fish Keys are omitted in a Public Testnet deployment and "Offline Fish Keys" are instead delegated to the submitted User Keys.
 ### Generate Genesis Ledger
-Once you have the keys for your deploymenet created, and the Staker Keys saved to a CSV, you can use them to generate a genesis ledger with the following command.
+Once you have the keys for your deployment created, and the Staker Keys saved to a CSV, you can use them to generate a genesis ledger with the following command.
docs/testnet-guardian-runbook.md (+1 -1)

@@ -144,7 +144,7 @@ Following is the list of events that could occur after the release
 |--|------|--------|-------------|--|
 | 1| Issues preventing the users from completing the task | Since, we already tested all the challenges, either the user did the task incorrectly or in a different way that was missed by the engineering team. Usually, the community responds quickly to issues involving the challenges. Let the user know of the alternative way to finish the task. If the errors are real protocol/product bugs, create an issue and request the user to attach coda logs to the issue| Minor | |
 | 2| Users' nodes crashing intermittently | If these are not one of the known bugs then create an issue for the same. Request the user to attach the latest crash report| Minor | |
-| 3| Users' nodes crashing persistently | If it is for a specific user, might be that they did something differently when starting the node or their environment is not as expected. For example, connection timeouts (eventually causing the dameons to crash) between daemon and prover or daemon and snark workers could be because of resource constraints. If the cause is not determined, create an issue and request the user to attach the crash report | Major | Engineering team |
+| 3| Users' nodes crashing persistently | If it is for a specific user, might be that they did something differently when starting the node or their environment is not as expected. For example, connection timeouts (eventually causing the daemons to crash) between daemon and prover or daemon and snark workers could be because of resource constraints. If the cause is not determined, create an issue and request the user to attach the crash report | Major | Engineering team |
 |4| Unstable testnet | Create an issue for the protocol team to investigate. Coordinate with the owners of this event to discuss further actions based on the findings by the protocol team | Critical | Aneesha, Brandon, Engineer investigating the issue|
rfcs/0016-transition-frontier-persistence.md (+2 -2)

@@ -8,7 +8,7 @@ This RFC proposes a new system for persisting the transition frontier's state to
 ## Motivation
 [motivation]: #motivation
-The Transition Frontier is too large of a data structure to just blindly serialize and write to disk. Under nonoptimal network scenarios, we expect the upper bound of the data structure to be >100Gb. Even if the structure were smaller, we cannot write the structure out to disk every time we mutate it as the speed of the transition frontier data structure is critical to the systems ability to prevent DDoS attacks. Therefore, a more robust and effecient system is required to persist the Transition Frontier to disk without negatively effecting the speed of operations on the in memory copy of the Transition Frontier.
+The Transition Frontier is too large of a data structure to just blindly serialize and write to disk. Under non-optimal network scenarios, we expect the upper bound of the data structure to be >100Gb. Even if the structure were smaller, we cannot write the structure out to disk every time we mutate it as the speed of the transition frontier data structure is critical to the systems ability to prevent DDoS attacks. Therefore, a more robust and effecient system is required to persist the Transition Frontier to disk without negatively effecting the speed of operations on the in memory copy of the Transition Frontier.
 ## Detailed design
 [detailed-design]: #detailed-design

@@ -30,7 +30,7 @@ As actions are performed on the Transition Frontier, diffs are emitted and store
 Having two different mechanisms for writing the same data can be tricky as there can be bugs in one of the two mechanisms that would cause the data structures to become desynchronized. In order to help prevent this, we can introduce an incremental hash on top of the Transition Frontier which can be updated upon each diff application. This hash will give a direct and easy way to compare the structural equality of the two data structures. Being incremental, however, also means that the order of diff application needs to be the same across both data structures, so care needs to be taken with that ordering. Therefore, in a sense, this hash will represent the structure and content of the data structure, as well as the order in which actions were taken to get there. We only care about the former in our case, and the latter is just a consequence of the hash being incremental.
-In order to calculate this hash correctly, we need to introduce a new concept to a diff, which is that of a diff mutant. Each diff represents some mutation to perform on the Transition Frontier, however not every diff will contain the enough information by itself to encapsulate the state of the data structure after the mutation occurs. For example, setting a balance on an account in two implementations of the data structure does not guarantee that the accounts in each a equal as there are other fields on the account besides that. This is where the concept of a diff mutant comes in. The mutant of a diff is the set of all modified values in the data structure after the diff has been applied. Using this, we can create a proper incremental diff which will truly ensure our data structures are in sync.
+In order to calculate this hash correctly, we need to introduce a new concept to a diff, which is that of a diff mutant. Each diff represents some mutation to perform on the Transition Frontier, however not every diff will contain the enough information by itself to encapsulate the state of the data structure after the mutation occurs. For example, setting a balance on an account in two implementations of the data structure does not guarantee that the accounts in each an equal as there are other fields on the account besides that. This is where the concept of a diff mutant comes in. The mutant of a diff is the set of all modified values in the data structure after the diff has been applied. Using this, we can create a proper incremental diff which will truly ensure our data structures are in sync.
 These hashes will be Sha256 as there is no reason to use the Pedersen hashing algorithm we use in the rest of our code since none of this information needs to be snarked. The formula for calculating a new hash `h'` given an old hash `h` and a diff `diff` is as follows: `h' = sha256 h diff (Diff.mutant diff)`.
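For illustration, the incremental hash described in that hunk amounts to a fold over the emitted diffs. The following is a minimal OCaml sketch only: the `Diff` module, its string serialization, and the use of the `digestif` package are assumptions made for the example, not the actual Mina/Coda implementation.

```ocaml
(* Sketch only: this Diff module, its serialization, and [mutant] are
   hypothetical stand-ins for whatever the transition frontier defines. *)
module Diff = struct
  type t = Set_balance of string * int (* (account id, new balance) *)

  (* The "mutant": the modified values read back from the data structure
     after the diff has been applied (hypothetical representation). *)
  let mutant (Set_balance (account, balance)) =
    Printf.sprintf "account:%s;balance:%d" account balance

  let to_string (Set_balance (account, balance)) =
    Printf.sprintf "set_balance(%s,%d)" account balance
end

(* h' = sha256 (h, diff, Diff.mutant diff), mirroring the formula above. *)
let apply_hash (h : Digestif.SHA256.t) (diff : Diff.t) : Digestif.SHA256.t =
  Digestif.SHA256.digest_string
    (Digestif.SHA256.to_raw_string h ^ Diff.to_string diff ^ Diff.mutant diff)

(* Folding the same diffs in the same order over the in-memory frontier and
   the persisted copy must yield the same hash; comparing the two hashes is
   the structural-equality check described above. *)
let frontier_hash (init : Digestif.SHA256.t) (diffs : Diff.t list) =
  List.fold_left apply_hash init diffs

let () =
  let genesis = Digestif.SHA256.digest_string "genesis-root" in
  let h = frontier_hash genesis [ Diff.Set_balance ("alice", 100) ] in
  print_endline (Digestif.SHA256.to_hex h)
```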
rfcs/0020-transition-frontier-extensions-2.md (+1 -1)

@@ -28,7 +28,7 @@ See [#1585](https://github.com/CodaProtocol/coda/pull/1585) for early discussion
 ### Extensions Redefined
-A Transition Frontier Extension is an stateful, incremental view on the state of a Transiton Frontier. When a Transition Frontier is initialized, all of its extensions are also initialized using the Transition Frontier's root. Every mutation performed is represented as a list of diffs, and when the Transition Frontier updates, each Extension is notified of this list of diffs synchronously. Transition Frontier Extensions will notify the Transition Frontier if there was a update to the Extension's view when handling the diffs. If an Extension's view is updated, then a synchronous event is broadcast internally with the new view of that Extension. A Transition Frontier Extension has access to the Transition Frontier so that it can query and calculate information it requires when it handles diffs.
+A Transition Frontier Extension is a stateful, incremental view on the state of a Transiton Frontier. When a Transition Frontier is initialized, all of its extensions are also initialized using the Transition Frontier's root. Every mutation performed is represented as a list of diffs, and when the Transition Frontier updates, each Extension is notified of this list of diffs synchronously. Transition Frontier Extensions will notify the Transition Frontier if there was a update to the Extension's view when handling the diffs. If an Extension's view is updated, then a synchronous event is broadcast internally with the new view of that Extension. A Transition Frontier Extension has access to the Transition Frontier so that it can query and calculate information it requires when it handles diffs.
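A schematic OCaml signature for such an extension might look like the sketch below; the module and function names here are illustrative placeholders rather than the actual Coda/Mina interfaces.

```ocaml
(* Illustrative sketch only; names and types are placeholders,
   not the real Coda/Mina modules. *)
module type Extension_intf = sig
  type t        (* the extension's internal state *)
  type view     (* the value broadcast when the view changes *)
  type diff     (* one element of the diff list emitted per mutation *)
  type frontier (* the Transition Frontier the extension may query *)

  (* Created from the frontier (i.e. from its root) at initialization time. *)
  val create : frontier -> t

  (* Called synchronously with the full diff list of one mutation.
     Returns [Some view] only if the extension's view changed, so the
     frontier knows whether to broadcast the new view internally. *)
  val handle_diffs : t -> frontier -> diff list -> view option
end
```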
rfcs/0026-transition-caching.md (+1 -1)

@@ -8,7 +8,7 @@ A new transition caching logic for the transition router which aims to track tra
 ## Motivation
 [motivation]: #motivation
-Within the transition router system, the only check for duplicate transitions is performed by the transition validator, and each transition is only checked against the transitions which are currently in the transition frontier. However, there are two types of duplicate transitions which are not being checked for: transitions which are still being processed by the system (either in the processor pipe or in the catchup scheduler and catchup thread), and transitions which have been determined to be invalid. In the case of the former, the system ends up processing more transitions than necessary, and the number of duplicated processing increases along with the networks size. In the case of the latter, the system is opened up for DDoS attacks since an adversary could continously send transitions with valid proofs but invalid staged ledger diffs, causing each node to spend a significant enough amount of time before invalidating the transition each time it recieves it.
+Within the transition router system, the only check for duplicate transitions is performed by the transition validator, and each transition is only checked against the transitions which are currently in the transition frontier. However, there are two types of duplicate transitions which are not being checked for: transitions which are still being processed by the system (either in the processor pipe or in the catchup scheduler and catchup thread), and transitions which have been determined to be invalid. In the case of the former, the system ends up processing more transitions than necessary, and the number of duplicated processing increases along with the networks size. In the case of the latter, the system is opened up for DDoS attacks since an adversary could continuously send transitions with valid proofs but invalid staged ledger diffs, causing each node to spend a significant enough amount of time before invalidating the transition each time it recieves it.
 NOTE: This RFC has been re-scoped to only address duplicate transitions already being processed and not transitions which were previously determined to be invalid.
src/lib/mina_networking/README.md (+7 -7)

@@ -2,7 +2,7 @@
 ## Typing Conventions
-In the context of this document, we will describe query and response types using a psuedo type-system. Tuples of data are in the form `(a, ..., b)`, lists are in the form `[a]`, and polymorphic types are represented as functions returning types. For example, we use the standard polymorphic types `optional :: type -> type` and `result :: type -> type -> type` throughout this document. The `optional` type constructor means that a value can be null, and a `result` type constructor means that there is 1 of 2 possible return types (typically a success type and an error type). For example, `optional int` might be an int or null, where as `result int error` is either an int or an error.
+In the context of this document, we will describe query and response types using a pseudo type-system. Tuples of data are in the form `(a, ..., b)`, lists are in the form `[a]`, and polymorphic types are represented as functions returning types. For example, we use the standard polymorphic types `optional :: type -> type` and `result :: type -> type -> type` throughout this document. The `optional` type constructor means that a value can be null, and a `result` type constructor means that there is 1 of 2 possible return types (typically a success type and an error type). For example, `optional int` might be an int or null, where as `result int error` is either an int or an error.
 ### Relevant types

@@ -15,7 +15,7 @@ In the context of this document, we will describe query and response types using
 -`protocol_state` == the proven contents of a block (contains `consensus_state`)
 -`block` == an entire block (contains `protocol_state` and the staged ledger diff for that block)
 -`staged_ledger` == the data structure which represents the intermediate (unsnarked) ledger state of the network (large)
--`pending_coinbase` == a auxilliary hash which identifies some state related to the staged ledger
+-`pending_coinbase` == an auxilliary hash which identifies some state related to the staged ledger
 -`sync_ledger_query` == queries for performing sync ledger protocol (requests for hashes or batches of subtrees of a merkle tree)
 -`sync_ledger_response` == responses for handling sync ledger protocol (responses of hashes or batches of subtrees of a merkle tree)
 -`transaction_pool_diff` == a bundle of multiple transactions to be included into the blockchain
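As a rough illustration of the pseudo type-system in that README, `optional` and `result` correspond directly to OCaml's `option` and `result`. The sketch below uses placeholder record fields, not the actual Mina wire types.

```ocaml
(* Placeholder types standing in for the document's pseudo-types;
   not the actual Mina wire types. *)
type state_hash = string
type block = { state_hash : state_hash; body : string }
type node_status = { peers : int; uptime_secs : int }

(* `optional [block]` : a list of blocks, or null. *)
type blocks_response = block list option

(* `result node_status error` : either a node status or an error. *)
type node_status_response = (node_status, string) result

let describe (r : node_status_response) =
  match r with
  | Ok s -> Printf.sprintf "up %ds, %d peers" s.uptime_secs s.peers
  | Error e -> "error: " ^ e
```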
@@ -34,13 +34,13 @@ Broadcasts newly produced blocks throughout the network.
 **Data**: `transaction_pool_diff`
-Broadcasts transactions from mempools throughout the network. Nodes broadcast locally submitted transactions on an interval for a period of time after creation, as well as rebroadcast externally submitted transactions if they were relevant and could be added to the their mempool.
+Broadcasts transactions from mempools throughout the network. Nodes broadcast locally submitted transactions on an interval for a period of time after creation, as well as rebroadcast externally submitted transactions if they were relevant and could be added to their mempool.
 ### snark\_pool\_diffs
 **Data**: `snark_pool_diff`
-Broadcasts snark work from mempools throughout the network. Snark coordinator's broadcast locally produced snarks on an interval for a period of time after creation, and all nodes rebroadcast externally produced snarks if they were relevant and could be added to the their mempool.
+Broadcasts snark work from mempools throughout the network. Snark coordinator's broadcast locally produced snarks on an interval for a period of time after creation, and all nodes rebroadcast externally produced snarks if they were relevant and could be added to their mempool.
 ## RPC Messages

@@ -82,7 +82,7 @@ Serves merkle ledger information over a "sync ledger" protocol. The sync ledger
 **Response**: `optional [block]`
-Returns a bulk bulk set of blocks associated with a provided set of state hashes. This is used by the catchup routine when it is downloading old blocks to re synchronize with the network over short distances of missing information. At the current moment, the maximum number of blocks that can be requested in a single batch is 20 (requesting more than 20 will result in no response).
+Returns a bulk set of blocks associated with a provided set of state hashes. This is used by the catchup routine when it is downloading old blocks to re synchronize with the network over short distances of missing information. At the current moment, the maximum number of blocks that can be requested in a single batch is 20 (requesting more than 20 will result in no response).
 ### get\_transition\_chain\_proof
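Because a request for more than 20 state hashes gets no response, a caller has to split larger requests into batches. A minimal OCaml sketch of that client-side chunking follows; `request_blocks` is a hypothetical stand-in for the actual RPC call.

```ocaml
(* Sketch only: [request_blocks] stands in for the real RPC; the batching
   logic is the point of the example. *)
let max_batch = 20

(* Split [hashes] into groups of at most [n] elements, preserving order. *)
let rec chunk n hashes =
  match hashes with
  | [] -> []
  | _ ->
      let rec take_drop i acc rest =
        match (i, rest) with
        | 0, _ | _, [] -> (List.rev acc, rest)
        | _, x :: tl -> take_drop (i - 1) (x :: acc) tl
      in
      let batch, rest = take_drop n [] hashes in
      batch :: chunk n rest

(* Issue one request per batch; a null response contributes no blocks. *)
let fetch_blocks ~request_blocks hashes =
  List.concat_map
    (fun batch -> Option.value (request_blocks batch) ~default:[])
    (chunk max_batch hashes)
```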
@@ -100,7 +100,7 @@ Returns a transition chain proof for a specified block on the blockchain. A tran
 **Response**: `[state_hash]`
-Returns the a list of `k` state hashes of blocks from the root of the frontier (point of finality) up to the current best tip (most recent block on the canonical chain).
+Returns the list of `k` state hashes of blocks from the root of the frontier (point of finality) up to the current best tip (most recent block on the canonical chain).
 ### get\_ancestry

@@ -136,4 +136,4 @@ Returns the best tip block along with the root block of the frontier, and a merk
 **Response**: `result node_status error`
-This acts as a telemetry RPC which asks a peer to provide invformation about their node status. Daemons do not have to respond to this request, and node operators may pass a command line flag to opt-out of responding to these node status requests.
+This acts as a telemetry RPC which asks a peer to provide information about their node status. Daemons do not have to respond to this request, and node operators may pass a command line flag to opt-out of responding to these node status requests.