Commit f1768c0: Fixup typos

1 parent: eca706b
11 files changed (+21, -21 lines)
automation/README.md (+2, -2)

````diff
@@ -264,11 +264,11 @@ There are several public Grafana dashboards available here:
 
 The purpose of a public testnet is to allow end-users to try out the software and learn how to operate it. Thus, we accept sign-ups for stake to be allocated in the genesis, and commit those keys to the compiled genesis ledger.
 
-For context, these keys correspond to the "Fish Keys" in the QA Net deployments, and Online Fish Keys are ommitted in a Public Testnet deployment and "Offline Fish Keys" are instead delegated to the submitted User Keys.
+For context, these keys correspond to the "Fish Keys" in the QA Net deployments, and Online Fish Keys are omitted in a Public Testnet deployment and "Offline Fish Keys" are instead delegated to the submitted User Keys.
 
 ### Generate Genesis Ledger
 
-Once you have the keys for your deploymenet created, and the Staker Keys saved to a CSV, you can use them to generate a genesis ledger with the following command.
+Once you have the keys for your deployment created, and the Staker Keys saved to a CSV, you can use them to generate a genesis ledger with the following command.
 
 ```
 scripts/generate-keys-and-ledger.sh
````

buildkite/src/README.md (+3, -3)

````diff
@@ -1,6 +1,6 @@
 # Buildkite CI
 
-This folder contains all dhall code which is an backbone for our CI related code for buildkite.
+This folder contains all dhall code which is a backbone for our CI related code for buildkite.
 
 # Structure
 
@@ -53,7 +53,7 @@ User defined value which describe current pipeline chunk of jobs to be executed.
 - coverage gathering - which gathers coverage artifacts and uploads it to coveralls.io
 
 To reach above pipeline configuration below configuration can be provided:
-(non important attributes where omitted)
+(non-important attributes where omitted)
 ```
 steps:
 - commands:
@@ -204,4 +204,4 @@ We want only to move dockers from gcr to dockerhub without changing version. Cur
 - "NEW_VERSION=3.0.0-dc6bf78"
 - "CODENAMES=Focal,Bullseye"
 - "PUBLISH=1"
-```
+```
````

docs/test_dsl_spec.md (+1, -1)

```diff
@@ -192,7 +192,7 @@ DSL.List.iter (left_partition @ right_partition) ~f:destroy
 
 ALTERNATIVE TEST: keep two partitions separate with a fixed topology, with 1-2 intermediate
 nodes bridging the networks, then take the other bridge offline temporarily and then have them
-rejoin the network without topologoical restrictions and see if the chains reconverge
+rejoin the network without topological restrictions and see if the chains reconverge
 
 ##### Basic Hard Fork Test
 
```
docs/testnet-guardian-runbook.md (+1, -1)

```diff
@@ -144,7 +144,7 @@ Following is the list of events that could occur after the release
 |--|------|--------|-------------|--|
 | 1| Issues preventing the users from completing the task | Since, we already tested all the challenges, either the user did the task incorrectly or in a different way that was missed by the engineering team. Usually, the community responds quickly to issues involving the challenges. Let the user know of the alternative way to finish the task. If the errors are real protocol/product bugs, create an issue and request the user to attach coda logs to the issue| Minor | |
 | 2| Users' nodes crashing intermittently | If these are not one of the known bugs then create an issue for the same. Request the user to attach the latest crash report| Minor | |
-| 3| Users' nodes crashing persistently | If it is for a specific user, might be that they did something differently when starting the node or their environment is not as expected. For example, connection timeouts (eventually causing the dameons to crash) between daemon and prover or daemon and snark workers could be because of resource constraints. If the cause is not determined, create an issue and request the user to attach the crash report | Major | Engineering team |
+| 3| Users' nodes crashing persistently | If it is for a specific user, might be that they did something differently when starting the node or their environment is not as expected. For example, connection timeouts (eventually causing the daemons to crash) between daemon and prover or daemon and snark workers could be because of resource constraints. If the cause is not determined, create an issue and request the user to attach the crash report | Major | Engineering team |
 |4| Unstable testnet | Create an issue for the protocol team to investigate. Coordinate with the owners of this event to discuss further actions based on the findings by the protocol team | Critical | Aneesha, Brandon, Engineer investigating the issue|
 
 ## Change Log
```

nix/README.md (+1, -1)

````diff
@@ -605,7 +605,7 @@ happen when you try to build anything inside the pure shell. It happens because
 the stack size for every process is limited, and it is shared between the
 current environment, the argument list, and some other things. Therefore, if
 your environment takes up too much space, not enough is left for the arguments.
-The way to fix the error is to unset some of the bigger enviornment variables,
+The way to fix the error is to unset some of the bigger environment variables,
 perhaps with
 
 ```bash
````

nix/ocaml.nix (+1, -1)

```diff
@@ -254,7 +254,7 @@ let
 done
 '') package.outputs);
 
-# Derivation which has all Mina's dependencies in it, and creates an empty output if the command succeds.
+# Derivation which has all Mina's dependencies in it, and creates an empty output if the command succeeds.
 # Useful for unit tests.
 runMinaCheck = { name ? "check", extraInputs ? [ ], extraArgs ? { }, }:
 check:
```

nix/rust.nix (+1, -1)

```diff
@@ -5,7 +5,7 @@ let
 prev.makeRustPlatform {
 cargo = rust;
 rustc = rust;
-# override stdenv.targetPlatform here, if neccesary
+# override stdenv.targetPlatform here, if necessary
 };
 toolchainHashes = {
 "1.72" = "sha256-dxE7lmCFWlq0nl/wKcmYvpP9zqQbBitAQgZ1zx9Ooik=";
```

rfcs/0016-transition-frontier-persistence.md (+2, -2)

```diff
@@ -8,7 +8,7 @@ This RFC proposes a new system for persisting the transition frontier's state to
 ## Motivation
 [motivation]: #motivation
 
-The Transition Frontier is too large of a data structure to just blindly serialize and write to disk. Under non optimal network scenarios, we expect the upper bound of the data structure to be >100Gb. Even if the structure were smaller, we cannot write the structure out to disk every time we mutate it as the speed of the transition frontier data structure is critical to the systems ability to prevent DDoS attacks. Therefore, a more robust and effecient system is required to persist the Transition Frontier to disk without negatively effecting the speed of operations on the in memory copy of the Transition Frontier.
+The Transition Frontier is too large of a data structure to just blindly serialize and write to disk. Under non-optimal network scenarios, we expect the upper bound of the data structure to be >100Gb. Even if the structure were smaller, we cannot write the structure out to disk every time we mutate it as the speed of the transition frontier data structure is critical to the systems ability to prevent DDoS attacks. Therefore, a more robust and effecient system is required to persist the Transition Frontier to disk without negatively effecting the speed of operations on the in memory copy of the Transition Frontier.
 
 ## Detailed design
 [detailed-design]: #detailed-design
@@ -30,7 +30,7 @@ As actions are performed on the Transition Frontier, diffs are emitted and store
 
 Having two different mechanisms for writing the same data can be tricky as there can be bugs in one of the two mechanisms that would cause the data structures to become desynchronized. In order to help prevent this, we can introduce an incremental hash on top of the Transition Frontier which can be updated upon each diff application. This hash will give a direct and easy way to compare the structural equality of the two data structures. Being incremental, however, also means that the order of diff application needs to be the same across both data structures, so care needs to be taken with that ordering. Therefore, in a sense, this hash will represent the structure and content of the data structure, as well as the order in which actions were taken to get there. We only care about the former in our case, and the latter is just a consequence of the hash being incremental.
 
-In order to calculate this hash correctly, we need to introduce a new concept to a diff, which is that of a diff mutant. Each diff represents some mutation to perform on the Transition Frontier, however not every diff will contain the enough information by itself to encapsulate the state of the data structure after the mutation occurs. For example, setting a balance on an account in two implementations of the data structure does not guarantee that the accounts in each a equal as there are other fields on the account besides that. This is where the concept of a diff mutant comes in. The mutant of a diff is the set of all modified values in the data structure after the diff has been applied. Using this, we can create a proper incremental diff which will truly ensure our data structures are in sync.
+In order to calculate this hash correctly, we need to introduce a new concept to a diff, which is that of a diff mutant. Each diff represents some mutation to perform on the Transition Frontier, however not every diff will contain the enough information by itself to encapsulate the state of the data structure after the mutation occurs. For example, setting a balance on an account in two implementations of the data structure does not guarantee that the accounts in each an equal as there are other fields on the account besides that. This is where the concept of a diff mutant comes in. The mutant of a diff is the set of all modified values in the data structure after the diff has been applied. Using this, we can create a proper incremental diff which will truly ensure our data structures are in sync.
 
 These hashes will be Sha256 as there is no reason to use the Pedersen hashing algorithm we use in the rest of our code since none of this information needs to be snarked. The formula for calculating a new hash `h'` given an old hash `h` and a diff `diff` is as follows: `h' = sha256 h diff (Diff.mutant diff)`.
 
```
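
The hash-update formula quoted in the second hunk is compact enough to sketch concretely. Below is a minimal, hedged OCaml illustration: it assumes the Digestif library for SHA-256 and that each diff and its mutant have already been serialized to strings, neither of which the RFC text itself specifies.

```ocaml
(* Sketch of the incremental hash: h' = sha256 h diff (Diff.mutant diff).
   Assumes the `digestif` opam package; diff serialization is elided. *)

let initial_hash : string = Digestif.SHA256.(to_hex (digest_string ""))

let update_hash (h : string) ~(diff : string) ~(mutant : string) : string =
  Digestif.SHA256.(to_hex (digest_string (h ^ diff ^ mutant)))

(* Replaying the same diffs in the same order over both the in-memory and
   persisted structures must yield the same final hash; any divergence
   signals that the two copies have desynchronized. *)
let final_hash (diffs : (string * string) list) : string =
  List.fold_left
    (fun h (diff, mutant) -> update_hash h ~diff ~mutant)
    initial_hash diffs
```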

rfcs/0020-transition-frontier-extensions-2.md (+1, -1)

```diff
@@ -28,7 +28,7 @@ See [#1585](https://github.com/CodaProtocol/coda/pull/1585) for early discussion
 
 ### Extensions Redefined
 
-A Transition Frontier Extension is an stateful, incremental view on the state of a Transiton Frontier. When a Transition Frontier is initialized, all of its extensions are also initialized using the Transition Frontier's root. Every mutation performed is represented as a list of diffs, and when the Transition Frontier updates, each Extension is notified of this list of diffs synchronously. Transition Frontier Extensions will notify the Transition Frontier if there was a update to the Extension's view when handling the diffs. If an Extension's view is updated, then a synchronous event is broadcast internally with the new view of that Extension. A Transition Frontier Extension has access to the Transition Frontier so that it can query and calculate information it requires when it handles diffs.
+A Transition Frontier Extension is a stateful, incremental view on the state of a Transiton Frontier. When a Transition Frontier is initialized, all of its extensions are also initialized using the Transition Frontier's root. Every mutation performed is represented as a list of diffs, and when the Transition Frontier updates, each Extension is notified of this list of diffs synchronously. Transition Frontier Extensions will notify the Transition Frontier if there was a update to the Extension's view when handling the diffs. If an Extension's view is updated, then a synchronous event is broadcast internally with the new view of that Extension. A Transition Frontier Extension has access to the Transition Frontier so that it can query and calculate information it requires when it handles diffs.
 
 ### Extension Guidelines
 
```
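
The contract described in this hunk (initialize from the frontier's root, receive diff batches synchronously, report view changes) reads naturally as a module signature. The sketch below is hypothetical; the names and shapes are illustrative, not the actual Mina interfaces.

```ocaml
(* Hypothetical shape of a Transition Frontier Extension. *)
module type Extension_intf = sig
  type t
  type view
  type diff
  type frontier

  (* Built from the frontier's root when the frontier is initialized. *)
  val create : frontier -> t

  (* Synchronously handle a batch of diffs; return the new view if it
     changed (so the frontier can broadcast it), or [None] if it did not. *)
  val handle_diffs : t -> frontier -> diff list -> view option
end
```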

rfcs/0026-transition-caching.md (+1, -1)

```diff
@@ -8,7 +8,7 @@ A new transition caching logic for the transition router which aims to track tra
 ## Motivation
 [motivation]: #motivation
 
-Within the transition router system, the only check for duplicate transitions is performed by the transition validator, and each transition is only checked against the transitions which are currently in the transition frontier. However, there are two types of duplicate transitions which are not being checked for: transitions which are still being processed by the system (either in the processor pipe or in the catchup scheduler and catchup thread), and transitions which have been determined to be invalid. In the case of the former, the system ends up processing more transitions than necessary, and the number of duplicated processing increases along with the networks size. In the case of the latter, the system is opened up for DDoS attacks since an adversary could continously send transitions with valid proofs but invalid staged ledger diffs, causing each node to spend a significant enough amount of time before invalidating the transition each time it recieves it.
+Within the transition router system, the only check for duplicate transitions is performed by the transition validator, and each transition is only checked against the transitions which are currently in the transition frontier. However, there are two types of duplicate transitions which are not being checked for: transitions which are still being processed by the system (either in the processor pipe or in the catchup scheduler and catchup thread), and transitions which have been determined to be invalid. In the case of the former, the system ends up processing more transitions than necessary, and the number of duplicated processing increases along with the networks size. In the case of the latter, the system is opened up for DDoS attacks since an adversary could continuously send transitions with valid proofs but invalid staged ledger diffs, causing each node to spend a significant enough amount of time before invalidating the transition each time it recieves it.
 
 NOTE: This RFC has been re-scoped to only address duplicate transitions already being processed and not transitions which were previously determined to be invalid.
 
```
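
As a rough illustration of the deduplication this motivation calls for, the sketch below tracks in-flight transitions by state hash. Everything here is hypothetical (the names, keying by a plain string hash, the stdlib `Hashtbl`); it is meant only to show the idea of rejecting duplicates that are already in the pipeline.

```ocaml
(* Tracks transitions currently being processed, keyed by state hash. *)
module Transition_cache = struct
  type t = (string, unit) Hashtbl.t

  let create () : t = Hashtbl.create 127

  (* [true] the first time a state hash is seen; [false] for a duplicate
     already in the processor pipe or catchup scheduler. *)
  let register (t : t) ~(state_hash : string) : bool =
    if Hashtbl.mem t state_hash then false
    else begin
      Hashtbl.add t state_hash ();
      true
    end

  (* Called once a transition is fully processed or rejected, so a later
     legitimate re-send of the same block is not dropped forever. *)
  let unregister (t : t) ~(state_hash : string) : unit =
    Hashtbl.remove t state_hash
end
```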

src/lib/mina_networking/README.md (+7, -7)

```diff
@@ -2,7 +2,7 @@
 
 ## Typing Conventions
 
-In the context of this document, we will describe query and response types using a psuedo type-system. Tuples of data are in the form `(a, ..., b)`, lists are in the form `[a]`, and polymorphic types are represented as functions returning types. For example, we use the standard polymorphic types `optional :: type -> type` and `result :: type -> type -> type` throughout this document. The `optional` type constructor means that a value can be null, and a `result` type constructor means that there is 1 of 2 possible return types (typically a success type and an error type). For example, `optional int` might be an int or null, where as `result int error` is either an int or an error.
+In the context of this document, we will describe query and response types using a pseudo type-system. Tuples of data are in the form `(a, ..., b)`, lists are in the form `[a]`, and polymorphic types are represented as functions returning types. For example, we use the standard polymorphic types `optional :: type -> type` and `result :: type -> type -> type` throughout this document. The `optional` type constructor means that a value can be null, and a `result` type constructor means that there is 1 of 2 possible return types (typically a success type and an error type). For example, `optional int` might be an int or null, where as `result int error` is either an int or an error.
 
 ### Relevant types
 
@@ -15,7 +15,7 @@ In the context of this document, we will describe query and response types using
 - `protocol_state` == the proven contents of a block (contains `consensus_state`)
 - `block` == an entire block (contains `protocol_state` and the staged ledger diff for that block)
 - `staged_ledger` == the data structure which represents the intermediate (unsnarked) ledger state of the network (large)
-- `pending_coinbase` == a auxilliary hash which identifies some state related to the staged ledger
+- `pending_coinbase` == an auxilliary hash which identifies some state related to the staged ledger
 - `sync_ledger_query` == queries for performing sync ledger protocol (requests for hashes or batches of subtrees of a merkle tree)
 - `sync_ledger_response` == responses for handling sync ledger protocol (responses of hashes or batches of subtrees of a merkle tree)
 - `transaction_pool_diff` == a bundle of multiple transactions to be included into the blockchain
@@ -34,13 +34,13 @@ Broadcasts newly produced blocks throughout the network.
 
 **Data**: `transaction_pool_diff`
 
-Broadcasts transactions from mempools throughout the network. Nodes broadcast locally submitted transactions on an interval for a period of time after creation, as well as rebroadcast externally submitted transactions if they were relevant and could be added to the their mempool.
+Broadcasts transactions from mempools throughout the network. Nodes broadcast locally submitted transactions on an interval for a period of time after creation, as well as rebroadcast externally submitted transactions if they were relevant and could be added to their mempool.
 
 ### snark\_pool\_diffs
 
 **Data**: `snark_pool_diff`
 
-Broadcasts snark work from mempools throughout the network. Snark coordinator's broadcast locally produced snarks on an interval for a period of time after creation, and all nodes rebroadcast externally produced snarks if they were relevant and could be added to the their mempool.
+Broadcasts snark work from mempools throughout the network. Snark coordinator's broadcast locally produced snarks on an interval for a period of time after creation, and all nodes rebroadcast externally produced snarks if they were relevant and could be added to their mempool.
 
 ## RPC Messages
 
@@ -82,7 +82,7 @@ Serves merkle ledger information over a "sync ledger" protocol. The sync ledger
 
 **Response**: `optional [block]`
 
-Returns a bulk bulk set of blocks associated with a provided set of state hashes. This is used by the catchup routine when it is downloading old blocks to re synchronize with the network over short distances of missing information. At the current moment, the maximum number of blocks that can be requested in a single batch is 20 (requesting more than 20 will result in no response).
+Returns a bulk set of blocks associated with a provided set of state hashes. This is used by the catchup routine when it is downloading old blocks to re synchronize with the network over short distances of missing information. At the current moment, the maximum number of blocks that can be requested in a single batch is 20 (requesting more than 20 will result in no response).
 
 ### get\_transition\_chain\_proof
 
@@ -100,7 +100,7 @@ Returns a transition chain proof for a specified block on the blockchain. A tran
 
 **Response**: `[state_hash]`
 
-Returns the a list of `k` state hashes of blocks from the root of the frontier (point of finality) up to the current best tip (most recent block on the canonical chain).
+Returns the list of `k` state hashes of blocks from the root of the frontier (point of finality) up to the current best tip (most recent block on the canonical chain).
 
 ### get\_ancestry
 
@@ -136,4 +136,4 @@ Returns the best tip block along with the root block of the frontier, and a merk
 
 **Response**: `result node_status error`
 
-This acts as a telemetry RPC which asks a peer to provide invformation about their node status. Daemons do not have to respond to this request, and node operators may pass a command line flag to opt-out of responding to these node status requests.
+This acts as a telemetry RPC which asks a peer to provide information about their node status. Daemons do not have to respond to this request, and node operators may pass a command line flag to opt-out of responding to these node status requests.
```
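
For readers mapping the document's pseudo type-system onto the codebase's language, `optional a` corresponds to OCaml's `'a option` and `result a e` to `('a, 'e) result`. A small self-contained illustration (the `error` alias is a placeholder, not a type from the codebase):

```ocaml
type error = string (* placeholder error type, for illustration only *)

let present : int option = Some 42 (* `optional int` holding a value *)
let missing : int option = None    (* `optional int` that is null *)

let success : (int, error) result = Ok 1            (* `result int error` *)
let failure : (int, error) result = Error "timeout" (* `result int error` *)
```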
