2nd assignment
7
z1/.gitignore
vendored
Normal file
@ -0,0 +1,7 @@
frontend/node_modules
go.sum
social-network
users.db
fiber
frontend/.next/
/out/
108
z2/README.md
Normal file
@ -0,0 +1,108 @@
# Veritas Revelata

The aim is to establish a robust framework to combat misinformation, disinformation, and fake news, thereby providing a more reliable information system. This paper outlines the key components of the proposal, including source evaluation, fact-checking, and technological requirements, while also addressing the challenges and the ethical considerations associated with such a system.

The app consists of two main components:

- a Go p2p node that runs the consensus algorithm for the blockchain
- a decentralized frontend served over the internet or locally
# Containers used

- golang build: compilation container for compiling the app
- golang runner: runs the p2p node and the RPC server
- fe builder: compiles and builds the Node.js app for serving
- fe runner: serves the frontend through SSR technology
# DEVOPS

The app requires a local registry running for development use cases.

The minikube local registry can be enabled with:

```sh
minikube addons enable registry
```

Starting the application:

```sh
./start-app.sh
```

Stopping it:

```sh
./stop-app.sh
```

To inspect the cluster, open the minikube dashboard:

```sh
minikube dashboard
```
## Kubernetes

First we have the deployments:

### p2p-node Deployment:

- **Metadata**: The deployment is named p2p-node and resides in the veritas namespace.
- **Replicas**: The desired number of replicas for this deployment is 1, meaning there should be one Pod running at any given time.
- **Pod Template**: This is the specification for creating Pods. It includes a single container.
- **Container**: The container is named p2pnode and uses the image localhost:5000/veritasnode. The image pull policy is IfNotPresent, meaning the image will be pulled only if it is not already present on the node.
- **Environment Variable**: The environment variable PORT is set to 6000.
- **Ports**: The container exposes port 6000 for TCP traffic, and this port is also mapped to the host's port 6000.
- **Volume Mounts**: A volume is mounted at /app using the volume named node-data.
- **Volumes**: The volume node-data is a PersistentVolumeClaim using the claim node-data.
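The bullets above map onto a Deployment manifest roughly like this (a sketch reconstructed from the description, not the exact file from the repo; the selector/labels block is an assumption, since the description does not list it):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: p2p-node
  namespace: veritas
spec:
  replicas: 1
  selector:
    matchLabels:
      app: p2p-node        # assumed label, not stated in the description
  template:
    metadata:
      labels:
        app: p2p-node      # assumed label
    spec:
      containers:
        - name: p2pnode
          image: localhost:5000/veritasnode
          imagePullPolicy: IfNotPresent
          env:
            - name: PORT
              value: "6000"
          ports:
            - containerPort: 6000
              hostPort: 6000
              protocol: TCP
          volumeMounts:
            - name: node-data
              mountPath: /app
      volumes:
        - name: node-data
          persistentVolumeClaim:
            claimName: node-data
```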
### veritasfe Deployment:

- **Metadata**: The deployment is named veritasfe and also resides in the veritas namespace.
- **Replicas**: The desired number of replicas for this deployment is also 1.
- **Selector**: The selector matches Pods with the label app: veritasfe.
- **Pod Template**:
  - **Labels**: The Pod labels are app: veritasfe.
  - **Container**: The container is named frontend and uses the image localhost:5000/veritasfe.
  - **Ports**: The container exposes port 3000 for TCP traffic, and this port is also mapped to the host's port 3000.
  - **Restart Policy**: The restart policy for the Pod is Always, meaning the containers in the Pod will always be restarted if they terminate.
### p2p-node Service

The `p2p-node` service is a Service resource that exposes the `p2p-node` deployment.

- **Metadata:**
  - Labels: `app: p2p-node`
  - Name: `p2p-node`
  - Namespace: `veritas`
- **Spec:**
  - The service listens on port `6000` and forwards traffic to the target port `6000` on the `p2p-node` Pod.
- **Ports:**
  - Name: `6000`
    Port: `6000`
    Target Port: `6000`
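Sketched as a manifest (reconstructed from the description above; the selector is assumed to match the `app: p2p-node` label, which the description does not state explicitly):

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: p2p-node
  name: p2p-node
  namespace: veritas
spec:
  selector:
    app: p2p-node          # assumed: selects the p2p-node Deployment's Pods
  ports:
    - name: "6000"
      port: 6000
      targetPort: 6000
```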
### frontend Service

The `frontend` service is a Service resource that exposes the `frontend` deployment.

- **Metadata:**
  - Labels: `app: frontend`
  - Name: `frontend`
  - Namespace: `veritas`
- **Spec:**
  - The service listens on port `3000` and forwards traffic to the target port `3000` on the `frontend` Pod.
- **Ports:**
  - Name: `3000`
    Port: `3000`
    Target Port: `3000`
# Accessibility

The frontend can be accessed at localhost:3000 in any web browser, and the RPC server runs on localhost:6000.
14
z2/backend/Dockerfile
Normal file
@ -0,0 +1,14 @@
# Build stage: compile the Go binary
FROM golang AS build
# RUN apk --no-cache add gcc g++ make git
WORKDIR /go/src/app
COPY . .
RUN go mod tidy
# RUN GOOS=linux go build -ldflags="-s -w" -o ./bin/web-app
RUN GOOS=linux go build -o ./bin/veritas

# Runtime stage: Ubuntu image carrying only the compiled binary
FROM ubuntu
RUN mkdir /app
WORKDIR /app
COPY --from=build /go/src/app/bin /go/bin
EXPOSE 6000
ENTRYPOINT /go/bin/veritas
12
z2/backend/core/block.go
Normal file
@ -0,0 +1,12 @@
package core

type Block struct {
	Hash         string
	Nonce        int
	Length       int
	PreviousHash string
}

func (b Block) Serialize() ([]byte, error) {
	return []byte(b.Hash), nil
}
193
z2/backend/core/consensus.go
Normal file
@ -0,0 +1,193 @@
package core

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"errors"
	"fmt"
	"hash"
	"log"
	"time"

	"thesis/crypto"
	"thesis/ent"
)

/*
** Type 1: Transaction sync
** Type 2: New Transaction cast
 */
type DataPacket struct {
	Type int
	Data []byte
}

func (p DataPacket) Serialize() []byte {
	data, err := json.Marshal(p)
	if err != nil {
		panic(err)
	}
	return data
}

func DeSerialize(p []byte) DataPacket {
	var msg DataPacket
	err := json.Unmarshal(p, &msg)
	if err != nil {
		panic(err)
	}
	return msg
}

func SyncContent(msg DataPacket) {
	if msg.Type == 1 {
		fmt.Println("got a tx sync")
	}
}

func ValidateContent(client *ent.Client, pk *crypto.PublicKey) error {
	txs, err := client.Transactions.Query().All(context.Background())
	handleError(err)
	for i := 0; i < len(txs); i++ {
		currentTx := &Transaction{
			Type:      txs[i].Type,
			Timestamp: int64(txs[i].Timestamp),
			Comment:   txs[i].Comment,
			Content:   txs[i].Content,
			Hash:      txs[i].Hash,
			Signature: txs[i].Signature,
		}
		// fmt.Println(currentTx)
		err := crypto.ValidateSignature(currentTx.Signature, pk, []byte(currentTx.Hash))
		handleError(err)
		log.Printf("Valid signature for tx: %s", currentTx.Hash)
	}

	blocks, err := client.Blocks.Query().All(context.Background())
	handleError(err)
	for i := 0; i < len(blocks); i++ {
		// currentBlock := &core.Block{
		// 	Hash:         blocks[i].Hash,
		// 	Nonce:        blocks[i].ID,
		// 	Length:       blocks[i].Length,
		// 	PreviousHash: blocks[i].PreviousHash,
		// }

		txs, err := blocks[i].QueryMinedTxs().All(context.Background())
		handleError(err)
		var hash hash.Hash
		for j := 0; j < len(txs); j++ {
			tx := &Transaction{
				Type:      txs[j].Type,
				Timestamp: int64(txs[j].Timestamp),
				Comment:   txs[j].Comment,
				Content:   txs[j].Content,
				Hash:      txs[j].Hash,
				Signature: txs[j].Signature,
			}
			txBytes, err := json.Marshal(tx)
			handleError(err)
			hash = sha256.New()
			hash.Write(txBytes)
		}
		if fmt.Sprintf("0x%x", string(hash.Sum(nil))) == blocks[i].Hash {
			log.Printf("Block %d validated \n", blocks[i].ID)
		} else {
			log.Printf("Block %d is invalid !!!\n", blocks[i].ID)
			return errors.New("invalid block detected")
		}
	}
	return nil
}

func AddNewTx(client *ent.Client, content []byte, commit string, sig string, key *ent.Key) error {
	// key := db.GetKeyFromHex(pk)
	// if key == nil {
	// 	return
	// }
	tx := &Transaction{
		Type:      2,
		Timestamp: time.Now().Unix(),
		Comment:   "regular tx",
		Content:   content,
		Signature: sig,
	}
	txBytes, err := json.Marshal(tx)
	handleError(err)
	hash := sha256.New()
	hash.Write(txBytes)
	tx.Hash = fmt.Sprintf("0x%x", string(hash.Sum(nil)))
	pubBytes, err := hex.DecodeString(key.PublicKey[2:])
	handleError(err)
	pk := new(crypto.PublicKey).Uncompress(pubBytes)
	fmt.Println(hash)
	err = crypto.ValidateSignature(sig, pk, []byte(fmt.Sprintf("%s%s%s", key.PublicKey, tx.Comment, commit)))
	if err != nil {
		fmt.Println("Invalid data submitted")
		// return nil
	}
	txContent := &TransactionContent{
		Signer:     key.PublicKey,
		Commitment: commit,
	}
	contentBytes, err := json.Marshal(txContent)
	handleError(err)
	dbTX, err := client.Transactions.Create().
		SetComment(tx.Comment).
		SetHash(tx.Hash).
		SetTimestamp(int(tx.Timestamp)).
		SetSignature(tx.Signature).
		SetType(tx.Type).
		SetContent(contentBytes).
		Save(context.Background())
	handleError(err)
	// db.AddTx(client, txObj)
	txBytes, err = json.Marshal(tx)
	handleError(err)
	hash = sha256.New()
	hash.Write(txBytes)

	blocks, err := client.Blocks.Query().All(context.Background())
	handleError(err)
	var block Block

	if len(blocks) == 0 {
		block = Block{
			Hash:         fmt.Sprintf("0x%x", string(hash.Sum(nil))),
			Nonce:        0,
			Length:       1,
			PreviousHash: "0x000",
		}
	} else {
		lastBlock := blocks[len(blocks)-1]

		block = Block{
			Hash:         fmt.Sprintf("0x%x", string(hash.Sum(nil))),
			Nonce:        lastBlock.ID,
			Length:       1,
			PreviousHash: lastBlock.Hash,
		}
	}

	dbBlock, err := client.Blocks.Create().
		SetHash(block.Hash).
		SetID(block.Nonce).
		SetLength(block.Length).
		SetPreviousHash(block.PreviousHash).
		Save(context.Background())
	handleError(err)
	_, err = dbTX.Update().AddBlock(dbBlock).Save(context.Background())
	handleError(err)

	return nil
}

func handleError(err error) {
	if err != nil {
		panic(err)
	}
}
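As a quick check of the packet encoding, the Serialize/DeSerialize pair above round-trips through JSON. A minimal standalone sketch (the DataPacket type is re-declared locally so the snippet runs without the thesis packages):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DataPacket mirrors the struct in core/consensus.go.
type DataPacket struct {
	Type int
	Data []byte
}

// Serialize marshals the packet to JSON, panicking on error
// just like the version in consensus.go.
func (p DataPacket) Serialize() []byte {
	data, err := json.Marshal(p)
	if err != nil {
		panic(err)
	}
	return data
}

// DeSerialize unmarshals a JSON payload back into a DataPacket.
func DeSerialize(b []byte) DataPacket {
	var msg DataPacket
	if err := json.Unmarshal(b, &msg); err != nil {
		panic(err)
	}
	return msg
}

func main() {
	p := DataPacket{Type: 1, Data: []byte("tx sync payload")}
	out := DeSerialize(p.Serialize())
	fmt.Println(out.Type, string(out.Data)) // 1 tx sync payload
}
```

Note that the []byte field is transported as a base64 string inside the JSON, but encoding/json restores it transparently on the way back.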
16
z2/backend/core/transaction.go
Normal file
@ -0,0 +1,16 @@
package core

type Transaction struct {
	Type      int
	Timestamp int64
	Comment   string
	Content   []byte
	Hash      string
	Signature string
}

// {"Signer": 0x0deadbeef, "commitment": 0x0000}
type TransactionContent struct {
	Signer     string
	Commitment string
}
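The transaction hash used in core (see AddNewTx) is the SHA-256 of the JSON-encoded struct, rendered as a 0x-prefixed hex string. A standalone sketch of that scheme, with the struct re-declared locally; hashTx is a hypothetical helper name, not a function from the repo:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// Transaction mirrors the struct in core/transaction.go.
type Transaction struct {
	Type      int
	Timestamp int64
	Comment   string
	Content   []byte
	Hash      string
	Signature string
}

// hashTx reproduces the hashing scheme used in core.AddNewTx:
// the transaction (with Hash still empty) is JSON-marshalled,
// SHA-256-hashed, and the digest is rendered as 0x-prefixed hex.
func hashTx(tx *Transaction) string {
	txBytes, err := json.Marshal(tx)
	if err != nil {
		panic(err)
	}
	h := sha256.New()
	h.Write(txBytes)
	return fmt.Sprintf("0x%x", h.Sum(nil))
}

func main() {
	tx := &Transaction{Type: 2, Timestamp: 1700000000, Comment: "regular tx", Content: []byte("hello")}
	fmt.Println(hashTx(tx)) // deterministic 0x-prefixed 64-hex-digit string
}
```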
93
z2/backend/crypto/keys.go
Normal file
@ -0,0 +1,93 @@
package crypto

import (
	"crypto/rand"
	"encoding/hex"
	"errors"
	"fmt"

	blst "github.com/supranational/blst/bindings/go"
)

type PublicKey = blst.P1Affine
type Signature = blst.P2Affine
type AggregateSignature = blst.P2Aggregate
type AggregatePublicKey = blst.P1Aggregate

func GenerateKeys() (*blst.SecretKey, *PublicKey) {
	var ikm [32]byte
	_, _ = rand.Read(ikm[:])

	sk := blst.KeyGen(ikm[:])
	pk := new(PublicKey).From(sk)
	// fmt.Printf("The public key is: 0x%s\n", hex.EncodeToString(pk.Compress()))
	// fmt.Printf("The private key is: 0x%s\n", hex.EncodeToString(sk.Serialize()))
	// sk.Print(private)
	return sk, pk
}

func ImportKeyFromHex(s string) (*blst.SecretKey, string) {
	bytesKey, err := hex.DecodeString(s[2:])
	if err != nil {
		panic(err)
	}

	sk := new(blst.SecretKey).Deserialize(bytesKey)
	pk := new(PublicKey).From(sk)
	// fmt.Printf("The public key is: 0x%s\n", hex.EncodeToString(pk.Compress()))
	return sk, hex.EncodeToString(pk.Compress())
}

func SignMessage(msg string, sk *blst.SecretKey) string {
	// var dst = []byte("BLS_SIG_BLS12381G2_XMD:SHA-256_SSWU_RO_NUL_")
	var dst = []byte("BLS_SIG_BLS12381G2_XMD:SHA-256_SSWU_RO_POP_")
	sig := new(Signature).Sign(sk, []byte(msg), dst)
	return hex.EncodeToString(sig.Compress())
}

func ValidateSignature(sigTxt string, pk *PublicKey, msg []byte) error {
	byteSig, err := hex.DecodeString(sigTxt)
	if err != nil {
		panic(err)
	}
	sig := new(Signature).Uncompress(byteSig)
	var dst = []byte("BLS_SIG_BLS12381G2_XMD:SHA-256_SSWU_RO_POP_")
	if sig.Verify(false, pk, true, msg, dst) {
		return nil
	} else {
		return errors.New("invalid signature found")
	}
}

func POC() {
	var keys []*blst.SecretKey
	var pubs []*PublicKey
	var sigs []*Signature

	for i := 0; i < 20; i++ {
		var ikm [32]byte
		_, _ = rand.Read(ikm[:])
		sk := blst.KeyGen(ikm[:])
		keys = append(keys, sk)
		pk := new(PublicKey).From(sk)
		pubs = append(pubs, pk)
		var dst = []byte("BLS_SIG_BLS12381G2_XMD:SHA-256_SSWU_RO_NUL_")
		msg := []byte("testing bls signatures")
		sig := new(Signature).Sign(sk, msg, dst)

		if !sig.Verify(true, pk, true, msg, dst) {
			fmt.Println("ERROR: Invalid signature!")
		} else {
			sigs = append(sigs, sig)
		}
	}

	aggPk := new(AggregatePublicKey)
	aggPk.Aggregate(pubs, false)
	aggSig := new(AggregateSignature)
	aggSig.Aggregate(sigs, false)

	fmt.Printf("the pk: 0x%s\n", hex.EncodeToString(aggPk.ToAffine().Compress()))
	fmt.Printf("the sig: 0x%s\n", hex.EncodeToString(aggSig.ToAffine().Compress()))
}
65
z2/backend/db/db.go
Normal file
@ -0,0 +1,65 @@
package db

import (
	"context"

	"thesis/core"
	"thesis/ent"
	"thesis/ent/key"
)

func AddTx(client *ent.Client, tx *core.Transaction, signerKey *ent.Key) *ent.Transactions {
	dbtx, err := client.Transactions.Create().
		SetType(tx.Type).
		SetTimestamp(int(tx.Timestamp)).
		SetComment(tx.Comment).
		SetContent(tx.Content).
		SetHash(tx.Hash).
		SetSignature(tx.Signature).AddSigner(signerKey).Save(context.Background())

	handleError(err)
	return dbtx
}

func AddBlock(client *ent.Client, block *core.Block) *ent.Blocks {
	dbBlock, err := client.Blocks.Create().
		SetHash(block.Hash).
		SetID(block.Nonce).
		SetLength(block.Length).
		SetPreviousHash(block.PreviousHash).
		Save(context.Background())
	handleError(err)
	return dbBlock
}

func AddKey(client *ent.Client, pk string, owner string) error {
	_, err := client.Key.Create().SetPublicKey(pk).SetOwner(owner).Save(context.Background())
	return err
}

func GetKeyFromHex(client *ent.Client, pk string) *ent.Key {
	k, err := client.Key.Query().Where(key.PublicKeyEQ(pk)).All(context.Background())
	handleError(err)
	return k[0]
}

func GetTxCount(client *ent.Client) int {
	count, err := client.Transactions.Query().Count(context.Background())
	handleError(err)
	return count
}

func GetLatestBlock(client *ent.Client) *ent.Blocks {
	blocks, err := client.Blocks.Query().All(context.Background())
	handleError(err)
	if len(blocks) == 0 {
		return nil
	}
	return blocks[len(blocks)-1]
}

func handleError(err error) {
	if err != nil {
		panic(err)
	}
}
167
z2/backend/ent/blocks.go
Normal file
@ -0,0 +1,167 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"fmt"
	"strings"
	"thesis/ent/blocks"

	"entgo.io/ent"
	"entgo.io/ent/dialect/sql"
)

// Blocks is the model entity for the Blocks schema.
type Blocks struct {
	config `json:"-"`
	// ID of the ent.
	ID int `json:"id,omitempty"`
	// Hash holds the value of the "hash" field.
	Hash string `json:"hash,omitempty"`
	// Length holds the value of the "length" field.
	Length int `json:"length,omitempty"`
	// PreviousHash holds the value of the "previousHash" field.
	PreviousHash string `json:"previousHash,omitempty"`
	// Edges holds the relations/edges for other nodes in the graph.
	// The values are being populated by the BlocksQuery when eager-loading is set.
	Edges        BlocksEdges `json:"edges"`
	selectValues sql.SelectValues
}

// BlocksEdges holds the relations/edges for other nodes in the graph.
type BlocksEdges struct {
	// Caster holds the value of the Caster edge.
	Caster []*Validators `json:"Caster,omitempty"`
	// MinedTxs holds the value of the MinedTxs edge.
	MinedTxs []*Transactions `json:"MinedTxs,omitempty"`
	// loadedTypes holds the information for reporting if a
	// type was loaded (or requested) in eager-loading or not.
	loadedTypes [2]bool
}

// CasterOrErr returns the Caster value or an error if the edge
// was not loaded in eager-loading.
func (e BlocksEdges) CasterOrErr() ([]*Validators, error) {
	if e.loadedTypes[0] {
		return e.Caster, nil
	}
	return nil, &NotLoadedError{edge: "Caster"}
}

// MinedTxsOrErr returns the MinedTxs value or an error if the edge
// was not loaded in eager-loading.
func (e BlocksEdges) MinedTxsOrErr() ([]*Transactions, error) {
	if e.loadedTypes[1] {
		return e.MinedTxs, nil
	}
	return nil, &NotLoadedError{edge: "MinedTxs"}
}

// scanValues returns the types for scanning values from sql.Rows.
func (*Blocks) scanValues(columns []string) ([]any, error) {
	values := make([]any, len(columns))
	for i := range columns {
		switch columns[i] {
		case blocks.FieldID, blocks.FieldLength:
			values[i] = new(sql.NullInt64)
		case blocks.FieldHash, blocks.FieldPreviousHash:
			values[i] = new(sql.NullString)
		default:
			values[i] = new(sql.UnknownType)
		}
	}
	return values, nil
}

// assignValues assigns the values that were returned from sql.Rows (after scanning)
// to the Blocks fields.
func (b *Blocks) assignValues(columns []string, values []any) error {
	if m, n := len(values), len(columns); m < n {
		return fmt.Errorf("mismatch number of scan values: %d != %d", m, n)
	}
	for i := range columns {
		switch columns[i] {
		case blocks.FieldID:
			value, ok := values[i].(*sql.NullInt64)
			if !ok {
				return fmt.Errorf("unexpected type %T for field id", value)
			}
			b.ID = int(value.Int64)
		case blocks.FieldHash:
			if value, ok := values[i].(*sql.NullString); !ok {
				return fmt.Errorf("unexpected type %T for field hash", values[i])
			} else if value.Valid {
				b.Hash = value.String
			}
		case blocks.FieldLength:
			if value, ok := values[i].(*sql.NullInt64); !ok {
				return fmt.Errorf("unexpected type %T for field length", values[i])
			} else if value.Valid {
				b.Length = int(value.Int64)
			}
		case blocks.FieldPreviousHash:
			if value, ok := values[i].(*sql.NullString); !ok {
				return fmt.Errorf("unexpected type %T for field previousHash", values[i])
			} else if value.Valid {
				b.PreviousHash = value.String
			}
		default:
			b.selectValues.Set(columns[i], values[i])
		}
	}
	return nil
}

// Value returns the ent.Value that was dynamically selected and assigned to the Blocks.
// This includes values selected through modifiers, order, etc.
func (b *Blocks) Value(name string) (ent.Value, error) {
	return b.selectValues.Get(name)
}

// QueryCaster queries the "Caster" edge of the Blocks entity.
func (b *Blocks) QueryCaster() *ValidatorsQuery {
	return NewBlocksClient(b.config).QueryCaster(b)
}

// QueryMinedTxs queries the "MinedTxs" edge of the Blocks entity.
func (b *Blocks) QueryMinedTxs() *TransactionsQuery {
	return NewBlocksClient(b.config).QueryMinedTxs(b)
}

// Update returns a builder for updating this Blocks.
// Note that you need to call Blocks.Unwrap() before calling this method if this Blocks
// was returned from a transaction, and the transaction was committed or rolled back.
func (b *Blocks) Update() *BlocksUpdateOne {
	return NewBlocksClient(b.config).UpdateOne(b)
}

// Unwrap unwraps the Blocks entity that was returned from a transaction after it was closed,
// so that all future queries will be executed through the driver which created the transaction.
func (b *Blocks) Unwrap() *Blocks {
	_tx, ok := b.config.driver.(*txDriver)
	if !ok {
		panic("ent: Blocks is not a transactional entity")
	}
	b.config.driver = _tx.drv
	return b
}

// String implements the fmt.Stringer.
func (b *Blocks) String() string {
	var builder strings.Builder
	builder.WriteString("Blocks(")
	builder.WriteString(fmt.Sprintf("id=%v, ", b.ID))
	builder.WriteString("hash=")
	builder.WriteString(b.Hash)
	builder.WriteString(", ")
	builder.WriteString("length=")
	builder.WriteString(fmt.Sprintf("%v", b.Length))
	builder.WriteString(", ")
	builder.WriteString("previousHash=")
	builder.WriteString(b.PreviousHash)
	builder.WriteByte(')')
	return builder.String()
}

// BlocksSlice is a parsable slice of Blocks.
type BlocksSlice []*Blocks
128
z2/backend/ent/blocks/blocks.go
Normal file
@ -0,0 +1,128 @@
|
|||||||
|
// Code generated by ent, DO NOT EDIT.
|
||||||
|
|
||||||
|
package blocks
|
||||||
|
|
||||||
|
import (
|
||||||
|
"entgo.io/ent/dialect/sql"
|
||||||
|
"entgo.io/ent/dialect/sql/sqlgraph"
|
||||||
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
// Label holds the string label denoting the blocks type in the database.
|
||||||
|
Label = "blocks"
|
||||||
|
// FieldID holds the string denoting the id field in the database.
|
||||||
|
FieldID = "id"
|
||||||
|
// FieldHash holds the string denoting the hash field in the database.
|
||||||
|
FieldHash = "hash"
|
||||||
|
// FieldLength holds the string denoting the length field in the database.
|
||||||
|
FieldLength = "length"
|
||||||
|
// FieldPreviousHash holds the string denoting the previoushash field in the database.
|
||||||
|
FieldPreviousHash = "previous_hash"
|
||||||
|
// EdgeCaster holds the string denoting the caster edge name in mutations.
|
||||||
|
EdgeCaster = "Caster"
|
||||||
|
// EdgeMinedTxs holds the string denoting the minedtxs edge name in mutations.
|
||||||
|
EdgeMinedTxs = "MinedTxs"
|
||||||
|
// Table holds the table name of the blocks in the database.
|
||||||
|
Table = "blocks"
|
||||||
|
// CasterTable is the table that holds the Caster relation/edge.
|
||||||
|
CasterTable = "validators"
|
||||||
|
// CasterInverseTable is the table name for the Validators entity.
|
||||||
|
// It exists in this package in order to avoid circular dependency with the "validators" package.
|
||||||
|
CasterInverseTable = "validators"
|
||||||
|
// CasterColumn is the table column denoting the Caster relation/edge.
|
||||||
|
CasterColumn = "blocks_caster"
|
||||||
|
// MinedTxsTable is the table that holds the MinedTxs relation/edge. The primary key declared below.
|
||||||
|
MinedTxsTable = "blocks_MinedTxs"
|
||||||
|
// MinedTxsInverseTable is the table name for the Transactions entity.
|
||||||
|
// It exists in this package in order to avoid circular dependency with the "transactions" package.
|
||||||
|
MinedTxsInverseTable = "transactions"
|
||||||
|
)
|
||||||
|
|
||||||
|
// Columns holds all SQL columns for blocks fields.
|
||||||
|
var Columns = []string{
|
||||||
|
FieldID,
|
||||||
|
FieldHash,
|
||||||
|
FieldLength,
|
||||||
|
FieldPreviousHash,
|
||||||
|
}
|
||||||
|
|
||||||
|
var (
|
||||||
|
// MinedTxsPrimaryKey and MinedTxsColumn2 are the table columns denoting the
|
||||||
|
// primary key for the MinedTxs relation (M2M).
|
||||||
|
MinedTxsPrimaryKey = []string{"blocks_id", "transactions_id"}
|
||||||
|
)
|
||||||
|
|
||||||
|
// ValidColumn reports if the column name is valid (part of the table columns).
|
||||||
|
func ValidColumn(column string) bool {
|
||||||
|
for i := range Columns {
|
||||||
|
if column == Columns[i] {
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return false
|
||||||
|
}
|
// OrderOption defines the ordering options for the Blocks queries.
type OrderOption func(*sql.Selector)

// ByID orders the results by the id field.
func ByID(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldID, opts...).ToFunc()
}

// ByHash orders the results by the hash field.
func ByHash(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldHash, opts...).ToFunc()
}

// ByLength orders the results by the length field.
func ByLength(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldLength, opts...).ToFunc()
}

// ByPreviousHash orders the results by the previousHash field.
func ByPreviousHash(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldPreviousHash, opts...).ToFunc()
}

// ByCasterCount orders the results by Caster count.
func ByCasterCount(opts ...sql.OrderTermOption) OrderOption {
	return func(s *sql.Selector) {
		sqlgraph.OrderByNeighborsCount(s, newCasterStep(), opts...)
	}
}

// ByCaster orders the results by Caster terms.
func ByCaster(term sql.OrderTerm, terms ...sql.OrderTerm) OrderOption {
	return func(s *sql.Selector) {
		sqlgraph.OrderByNeighborTerms(s, newCasterStep(), append([]sql.OrderTerm{term}, terms...)...)
	}
}

// ByMinedTxsCount orders the results by MinedTxs count.
func ByMinedTxsCount(opts ...sql.OrderTermOption) OrderOption {
	return func(s *sql.Selector) {
		sqlgraph.OrderByNeighborsCount(s, newMinedTxsStep(), opts...)
	}
}

// ByMinedTxs orders the results by MinedTxs terms.
func ByMinedTxs(term sql.OrderTerm, terms ...sql.OrderTerm) OrderOption {
	return func(s *sql.Selector) {
		sqlgraph.OrderByNeighborTerms(s, newMinedTxsStep(), append([]sql.OrderTerm{term}, terms...)...)
	}
}
func newCasterStep() *sqlgraph.Step {
	return sqlgraph.NewStep(
		sqlgraph.From(Table, FieldID),
		sqlgraph.To(CasterInverseTable, FieldID),
		sqlgraph.Edge(sqlgraph.O2M, false, CasterTable, CasterColumn),
	)
}
func newMinedTxsStep() *sqlgraph.Step {
	return sqlgraph.NewStep(
		sqlgraph.From(Table, FieldID),
		sqlgraph.To(MinedTxsInverseTable, FieldID),
		sqlgraph.Edge(sqlgraph.M2M, false, MinedTxsTable, MinedTxsPrimaryKey...),
	)
}
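The ordering helpers above are plain functions over a selector, so multiple orderings compose by simply applying each option in turn. A minimal self-contained sketch of the same functional-option pattern (toy Query/OrderOption types for illustration, not ent's real API):

```go
package main

import "fmt"

// Query is a toy stand-in for ent's *sql.Selector.
type Query struct{ orderBy []string }

// OrderOption mirrors the generated type: a function that mutates the query.
type OrderOption func(*Query)

// ByField builds an option that appends one ORDER BY term,
// as ByHash/ByLength do above.
func ByField(name string) OrderOption {
	return func(q *Query) { q.orderBy = append(q.orderBy, name) }
}

// Apply runs the options in order, the way a query builder applies Order(...).
func Apply(q *Query, opts ...OrderOption) {
	for _, o := range opts {
		o(q)
	}
}

func main() {
	q := &Query{}
	Apply(q, ByField("length"), ByField("hash"))
	fmt.Println(q.orderBy) // [length hash]
}
```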
301
z2/backend/ent/blocks/where.go
Normal file
@ -0,0 +1,301 @@
// Code generated by ent, DO NOT EDIT.

package blocks

import (
	"thesis/ent/predicate"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
)

// ID filters vertices based on their ID field.
func ID(id int) predicate.Blocks {
	return predicate.Blocks(sql.FieldEQ(FieldID, id))
}

// IDEQ applies the EQ predicate on the ID field.
func IDEQ(id int) predicate.Blocks {
	return predicate.Blocks(sql.FieldEQ(FieldID, id))
}

// IDNEQ applies the NEQ predicate on the ID field.
func IDNEQ(id int) predicate.Blocks {
	return predicate.Blocks(sql.FieldNEQ(FieldID, id))
}

// IDIn applies the In predicate on the ID field.
func IDIn(ids ...int) predicate.Blocks {
	return predicate.Blocks(sql.FieldIn(FieldID, ids...))
}

// IDNotIn applies the NotIn predicate on the ID field.
func IDNotIn(ids ...int) predicate.Blocks {
	return predicate.Blocks(sql.FieldNotIn(FieldID, ids...))
}

// IDGT applies the GT predicate on the ID field.
func IDGT(id int) predicate.Blocks {
	return predicate.Blocks(sql.FieldGT(FieldID, id))
}

// IDGTE applies the GTE predicate on the ID field.
func IDGTE(id int) predicate.Blocks {
	return predicate.Blocks(sql.FieldGTE(FieldID, id))
}

// IDLT applies the LT predicate on the ID field.
func IDLT(id int) predicate.Blocks {
	return predicate.Blocks(sql.FieldLT(FieldID, id))
}

// IDLTE applies the LTE predicate on the ID field.
func IDLTE(id int) predicate.Blocks {
	return predicate.Blocks(sql.FieldLTE(FieldID, id))
}

// Hash applies equality check predicate on the "hash" field. It's identical to HashEQ.
func Hash(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldEQ(FieldHash, v))
}

// Length applies equality check predicate on the "length" field. It's identical to LengthEQ.
func Length(v int) predicate.Blocks {
	return predicate.Blocks(sql.FieldEQ(FieldLength, v))
}

// PreviousHash applies equality check predicate on the "previousHash" field. It's identical to PreviousHashEQ.
func PreviousHash(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldEQ(FieldPreviousHash, v))
}

// HashEQ applies the EQ predicate on the "hash" field.
func HashEQ(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldEQ(FieldHash, v))
}

// HashNEQ applies the NEQ predicate on the "hash" field.
func HashNEQ(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldNEQ(FieldHash, v))
}

// HashIn applies the In predicate on the "hash" field.
func HashIn(vs ...string) predicate.Blocks {
	return predicate.Blocks(sql.FieldIn(FieldHash, vs...))
}

// HashNotIn applies the NotIn predicate on the "hash" field.
func HashNotIn(vs ...string) predicate.Blocks {
	return predicate.Blocks(sql.FieldNotIn(FieldHash, vs...))
}

// HashGT applies the GT predicate on the "hash" field.
func HashGT(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldGT(FieldHash, v))
}

// HashGTE applies the GTE predicate on the "hash" field.
func HashGTE(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldGTE(FieldHash, v))
}

// HashLT applies the LT predicate on the "hash" field.
func HashLT(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldLT(FieldHash, v))
}

// HashLTE applies the LTE predicate on the "hash" field.
func HashLTE(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldLTE(FieldHash, v))
}

// HashContains applies the Contains predicate on the "hash" field.
func HashContains(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldContains(FieldHash, v))
}

// HashHasPrefix applies the HasPrefix predicate on the "hash" field.
func HashHasPrefix(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldHasPrefix(FieldHash, v))
}

// HashHasSuffix applies the HasSuffix predicate on the "hash" field.
func HashHasSuffix(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldHasSuffix(FieldHash, v))
}

// HashEqualFold applies the EqualFold predicate on the "hash" field.
func HashEqualFold(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldEqualFold(FieldHash, v))
}

// HashContainsFold applies the ContainsFold predicate on the "hash" field.
func HashContainsFold(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldContainsFold(FieldHash, v))
}

// LengthEQ applies the EQ predicate on the "length" field.
func LengthEQ(v int) predicate.Blocks {
	return predicate.Blocks(sql.FieldEQ(FieldLength, v))
}

// LengthNEQ applies the NEQ predicate on the "length" field.
func LengthNEQ(v int) predicate.Blocks {
	return predicate.Blocks(sql.FieldNEQ(FieldLength, v))
}

// LengthIn applies the In predicate on the "length" field.
func LengthIn(vs ...int) predicate.Blocks {
	return predicate.Blocks(sql.FieldIn(FieldLength, vs...))
}

// LengthNotIn applies the NotIn predicate on the "length" field.
func LengthNotIn(vs ...int) predicate.Blocks {
	return predicate.Blocks(sql.FieldNotIn(FieldLength, vs...))
}

// LengthGT applies the GT predicate on the "length" field.
func LengthGT(v int) predicate.Blocks {
	return predicate.Blocks(sql.FieldGT(FieldLength, v))
}

// LengthGTE applies the GTE predicate on the "length" field.
func LengthGTE(v int) predicate.Blocks {
	return predicate.Blocks(sql.FieldGTE(FieldLength, v))
}

// LengthLT applies the LT predicate on the "length" field.
func LengthLT(v int) predicate.Blocks {
	return predicate.Blocks(sql.FieldLT(FieldLength, v))
}

// LengthLTE applies the LTE predicate on the "length" field.
func LengthLTE(v int) predicate.Blocks {
	return predicate.Blocks(sql.FieldLTE(FieldLength, v))
}

// PreviousHashEQ applies the EQ predicate on the "previousHash" field.
func PreviousHashEQ(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldEQ(FieldPreviousHash, v))
}

// PreviousHashNEQ applies the NEQ predicate on the "previousHash" field.
func PreviousHashNEQ(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldNEQ(FieldPreviousHash, v))
}

// PreviousHashIn applies the In predicate on the "previousHash" field.
func PreviousHashIn(vs ...string) predicate.Blocks {
	return predicate.Blocks(sql.FieldIn(FieldPreviousHash, vs...))
}

// PreviousHashNotIn applies the NotIn predicate on the "previousHash" field.
func PreviousHashNotIn(vs ...string) predicate.Blocks {
	return predicate.Blocks(sql.FieldNotIn(FieldPreviousHash, vs...))
}

// PreviousHashGT applies the GT predicate on the "previousHash" field.
func PreviousHashGT(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldGT(FieldPreviousHash, v))
}

// PreviousHashGTE applies the GTE predicate on the "previousHash" field.
func PreviousHashGTE(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldGTE(FieldPreviousHash, v))
}

// PreviousHashLT applies the LT predicate on the "previousHash" field.
func PreviousHashLT(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldLT(FieldPreviousHash, v))
}

// PreviousHashLTE applies the LTE predicate on the "previousHash" field.
func PreviousHashLTE(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldLTE(FieldPreviousHash, v))
}

// PreviousHashContains applies the Contains predicate on the "previousHash" field.
func PreviousHashContains(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldContains(FieldPreviousHash, v))
}

// PreviousHashHasPrefix applies the HasPrefix predicate on the "previousHash" field.
func PreviousHashHasPrefix(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldHasPrefix(FieldPreviousHash, v))
}

// PreviousHashHasSuffix applies the HasSuffix predicate on the "previousHash" field.
func PreviousHashHasSuffix(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldHasSuffix(FieldPreviousHash, v))
}

// PreviousHashEqualFold applies the EqualFold predicate on the "previousHash" field.
func PreviousHashEqualFold(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldEqualFold(FieldPreviousHash, v))
}

// PreviousHashContainsFold applies the ContainsFold predicate on the "previousHash" field.
func PreviousHashContainsFold(v string) predicate.Blocks {
	return predicate.Blocks(sql.FieldContainsFold(FieldPreviousHash, v))
}

// HasCaster applies the HasEdge predicate on the "Caster" edge.
func HasCaster() predicate.Blocks {
	return predicate.Blocks(func(s *sql.Selector) {
		step := sqlgraph.NewStep(
			sqlgraph.From(Table, FieldID),
			sqlgraph.Edge(sqlgraph.O2M, false, CasterTable, CasterColumn),
		)
		sqlgraph.HasNeighbors(s, step)
	})
}

// HasCasterWith applies the HasEdge predicate on the "Caster" edge with a given conditions (other predicates).
func HasCasterWith(preds ...predicate.Validators) predicate.Blocks {
	return predicate.Blocks(func(s *sql.Selector) {
		step := newCasterStep()
		sqlgraph.HasNeighborsWith(s, step, func(s *sql.Selector) {
			for _, p := range preds {
				p(s)
			}
		})
	})
}

// HasMinedTxs applies the HasEdge predicate on the "MinedTxs" edge.
func HasMinedTxs() predicate.Blocks {
	return predicate.Blocks(func(s *sql.Selector) {
		step := sqlgraph.NewStep(
			sqlgraph.From(Table, FieldID),
			sqlgraph.Edge(sqlgraph.M2M, false, MinedTxsTable, MinedTxsPrimaryKey...),
		)
		sqlgraph.HasNeighbors(s, step)
	})
}

// HasMinedTxsWith applies the HasEdge predicate on the "MinedTxs" edge with a given conditions (other predicates).
func HasMinedTxsWith(preds ...predicate.Transactions) predicate.Blocks {
	return predicate.Blocks(func(s *sql.Selector) {
		step := newMinedTxsStep()
		sqlgraph.HasNeighborsWith(s, step, func(s *sql.Selector) {
			for _, p := range preds {
				p(s)
			}
		})
	})
}

// And groups predicates with the AND operator between them.
func And(predicates ...predicate.Blocks) predicate.Blocks {
	return predicate.Blocks(sql.AndPredicates(predicates...))
}

// Or groups predicates with the OR operator between them.
func Or(predicates ...predicate.Blocks) predicate.Blocks {
	return predicate.Blocks(sql.OrPredicates(predicates...))
}

// Not applies the not operator on the given predicate.
func Not(p predicate.Blocks) predicate.Blocks {
	return predicate.Blocks(sql.NotPredicates(p))
}
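Every predicate in this file is a function over a selector, which is why And can take any mix of field and edge predicates. A minimal self-contained sketch of that functional-predicate composition (toy Query/Predicate types; the real And wraps conditions in a SQL AND group rather than plain appending):

```go
package main

import "fmt"

// Query is a toy stand-in for ent's *sql.Selector.
type Query struct{ conds []string }

// Predicate mirrors predicate.Blocks: a function that adds conditions.
type Predicate func(*Query)

// FieldEQ builds a simple equality condition, like sql.FieldEQ above.
func FieldEQ(field, v string) Predicate {
	return func(q *Query) { q.conds = append(q.conds, field+" = "+v) }
}

// And composes predicates by applying each in turn (simplified: the
// generated And also parenthesizes and joins with AND).
func And(ps ...Predicate) Predicate {
	return func(q *Query) {
		for _, p := range ps {
			p(q)
		}
	}
}

func main() {
	q := &Query{}
	And(FieldEQ("hash", "abc"), FieldEQ("previousHash", "def"))(q)
	fmt.Println(q.conds)
}
```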
285
z2/backend/ent/blocks_create.go
Normal file
@ -0,0 +1,285 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"
	"errors"
	"fmt"
	"thesis/ent/blocks"
	"thesis/ent/transactions"
	"thesis/ent/validators"

	"entgo.io/ent/dialect/sql/sqlgraph"
	"entgo.io/ent/schema/field"
)

// BlocksCreate is the builder for creating a Blocks entity.
type BlocksCreate struct {
	config
	mutation *BlocksMutation
	hooks    []Hook
}

// SetHash sets the "hash" field.
func (bc *BlocksCreate) SetHash(s string) *BlocksCreate {
	bc.mutation.SetHash(s)
	return bc
}

// SetLength sets the "length" field.
func (bc *BlocksCreate) SetLength(i int) *BlocksCreate {
	bc.mutation.SetLength(i)
	return bc
}

// SetPreviousHash sets the "previousHash" field.
func (bc *BlocksCreate) SetPreviousHash(s string) *BlocksCreate {
	bc.mutation.SetPreviousHash(s)
	return bc
}

// SetID sets the "id" field.
func (bc *BlocksCreate) SetID(i int) *BlocksCreate {
	bc.mutation.SetID(i)
	return bc
}

// AddCasterIDs adds the "Caster" edge to the Validators entity by IDs.
func (bc *BlocksCreate) AddCasterIDs(ids ...int) *BlocksCreate {
	bc.mutation.AddCasterIDs(ids...)
	return bc
}

// AddCaster adds the "Caster" edges to the Validators entity.
func (bc *BlocksCreate) AddCaster(v ...*Validators) *BlocksCreate {
	ids := make([]int, len(v))
	for i := range v {
		ids[i] = v[i].ID
	}
	return bc.AddCasterIDs(ids...)
}

// AddMinedTxIDs adds the "MinedTxs" edge to the Transactions entity by IDs.
func (bc *BlocksCreate) AddMinedTxIDs(ids ...int) *BlocksCreate {
	bc.mutation.AddMinedTxIDs(ids...)
	return bc
}

// AddMinedTxs adds the "MinedTxs" edges to the Transactions entity.
func (bc *BlocksCreate) AddMinedTxs(t ...*Transactions) *BlocksCreate {
	ids := make([]int, len(t))
	for i := range t {
		ids[i] = t[i].ID
	}
	return bc.AddMinedTxIDs(ids...)
}

// Mutation returns the BlocksMutation object of the builder.
func (bc *BlocksCreate) Mutation() *BlocksMutation {
	return bc.mutation
}

// Save creates the Blocks in the database.
func (bc *BlocksCreate) Save(ctx context.Context) (*Blocks, error) {
	return withHooks(ctx, bc.sqlSave, bc.mutation, bc.hooks)
}

// SaveX calls Save and panics if Save returns an error.
func (bc *BlocksCreate) SaveX(ctx context.Context) *Blocks {
	v, err := bc.Save(ctx)
	if err != nil {
		panic(err)
	}
	return v
}

// Exec executes the query.
func (bc *BlocksCreate) Exec(ctx context.Context) error {
	_, err := bc.Save(ctx)
	return err
}

// ExecX is like Exec, but panics if an error occurs.
func (bc *BlocksCreate) ExecX(ctx context.Context) {
	if err := bc.Exec(ctx); err != nil {
		panic(err)
	}
}

// check runs all checks and user-defined validators on the builder.
func (bc *BlocksCreate) check() error {
	if _, ok := bc.mutation.Hash(); !ok {
		return &ValidationError{Name: "hash", err: errors.New(`ent: missing required field "Blocks.hash"`)}
	}
	if _, ok := bc.mutation.Length(); !ok {
		return &ValidationError{Name: "length", err: errors.New(`ent: missing required field "Blocks.length"`)}
	}
	if _, ok := bc.mutation.PreviousHash(); !ok {
		return &ValidationError{Name: "previousHash", err: errors.New(`ent: missing required field "Blocks.previousHash"`)}
	}
	return nil
}

func (bc *BlocksCreate) sqlSave(ctx context.Context) (*Blocks, error) {
	if err := bc.check(); err != nil {
		return nil, err
	}
	_node, _spec := bc.createSpec()
	if err := sqlgraph.CreateNode(ctx, bc.driver, _spec); err != nil {
		if sqlgraph.IsConstraintError(err) {
			err = &ConstraintError{msg: err.Error(), wrap: err}
		}
		return nil, err
	}
	if _spec.ID.Value != _node.ID {
		id := _spec.ID.Value.(int64)
		_node.ID = int(id)
	}
	bc.mutation.id = &_node.ID
	bc.mutation.done = true
	return _node, nil
}

func (bc *BlocksCreate) createSpec() (*Blocks, *sqlgraph.CreateSpec) {
	var (
		_node = &Blocks{config: bc.config}
		_spec = sqlgraph.NewCreateSpec(blocks.Table, sqlgraph.NewFieldSpec(blocks.FieldID, field.TypeInt))
	)
	if id, ok := bc.mutation.ID(); ok {
		_node.ID = id
		_spec.ID.Value = id
	}
	if value, ok := bc.mutation.Hash(); ok {
		_spec.SetField(blocks.FieldHash, field.TypeString, value)
		_node.Hash = value
	}
	if value, ok := bc.mutation.Length(); ok {
		_spec.SetField(blocks.FieldLength, field.TypeInt, value)
		_node.Length = value
	}
	if value, ok := bc.mutation.PreviousHash(); ok {
		_spec.SetField(blocks.FieldPreviousHash, field.TypeString, value)
		_node.PreviousHash = value
	}
	if nodes := bc.mutation.CasterIDs(); len(nodes) > 0 {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.O2M,
			Inverse: false,
			Table:   blocks.CasterTable,
			Columns: []string{blocks.CasterColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(validators.FieldID, field.TypeInt),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges = append(_spec.Edges, edge)
	}
	if nodes := bc.mutation.MinedTxsIDs(); len(nodes) > 0 {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2M,
			Inverse: false,
			Table:   blocks.MinedTxsTable,
			Columns: blocks.MinedTxsPrimaryKey,
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(transactions.FieldID, field.TypeInt),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges = append(_spec.Edges, edge)
	}
	return _node, _spec
}

// BlocksCreateBulk is the builder for creating many Blocks entities in bulk.
type BlocksCreateBulk struct {
	config
	err      error
	builders []*BlocksCreate
}

// Save creates the Blocks entities in the database.
func (bcb *BlocksCreateBulk) Save(ctx context.Context) ([]*Blocks, error) {
	if bcb.err != nil {
		return nil, bcb.err
	}
	specs := make([]*sqlgraph.CreateSpec, len(bcb.builders))
	nodes := make([]*Blocks, len(bcb.builders))
	mutators := make([]Mutator, len(bcb.builders))
	for i := range bcb.builders {
		func(i int, root context.Context) {
			builder := bcb.builders[i]
			var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
				mutation, ok := m.(*BlocksMutation)
				if !ok {
					return nil, fmt.Errorf("unexpected mutation type %T", m)
				}
				if err := builder.check(); err != nil {
					return nil, err
				}
				builder.mutation = mutation
				var err error
				nodes[i], specs[i] = builder.createSpec()
				if i < len(mutators)-1 {
					_, err = mutators[i+1].Mutate(root, bcb.builders[i+1].mutation)
				} else {
					spec := &sqlgraph.BatchCreateSpec{Nodes: specs}
					// Invoke the actual operation on the latest mutation in the chain.
					if err = sqlgraph.BatchCreate(ctx, bcb.driver, spec); err != nil {
						if sqlgraph.IsConstraintError(err) {
							err = &ConstraintError{msg: err.Error(), wrap: err}
						}
					}
				}
				if err != nil {
					return nil, err
				}
				mutation.id = &nodes[i].ID
				if specs[i].ID.Value != nil && nodes[i].ID == 0 {
					id := specs[i].ID.Value.(int64)
					nodes[i].ID = int(id)
				}
				mutation.done = true
				return nodes[i], nil
			})
			for i := len(builder.hooks) - 1; i >= 0; i-- {
				mut = builder.hooks[i](mut)
			}
			mutators[i] = mut
		}(i, ctx)
	}
	if len(mutators) > 0 {
		if _, err := mutators[0].Mutate(ctx, bcb.builders[0].mutation); err != nil {
			return nil, err
		}
	}
	return nodes, nil
}

// SaveX is like Save, but panics if an error occurs.
func (bcb *BlocksCreateBulk) SaveX(ctx context.Context) []*Blocks {
	v, err := bcb.Save(ctx)
	if err != nil {
		panic(err)
	}
	return v
}

// Exec executes the query.
func (bcb *BlocksCreateBulk) Exec(ctx context.Context) error {
	_, err := bcb.Save(ctx)
	return err
}

// ExecX is like Exec, but panics if an error occurs.
func (bcb *BlocksCreateBulk) ExecX(ctx context.Context) {
	if err := bcb.Exec(ctx); err != nil {
		panic(err)
	}
}
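The bulk Save above wraps the core create operation with each builder's hooks in reverse slice order, so hooks execute in the order they were registered. That wrapping idiom, reduced to a self-contained sketch (toy Mutator/Hook types, illustrative names only):

```go
package main

import "fmt"

// Mutator and Hook mirror ent's middleware idiom: a Hook wraps a Mutator.
type Mutator func() string

type Hook func(Mutator) Mutator

// WithHooks wraps core with hooks in reverse, as the bulk Save loop does,
// so hooks[0] runs outermost (i.e. first).
func WithHooks(core Mutator, hooks []Hook) Mutator {
	mut := core
	for i := len(hooks) - 1; i >= 0; i-- {
		mut = hooks[i](mut)
	}
	return mut
}

// named returns a hook that records its position in the call chain.
func named(name string) Hook {
	return func(next Mutator) Mutator {
		return func() string { return name + "(" + next() + ")" }
	}
}

func main() {
	run := WithHooks(func() string { return "create" }, []Hook{named("h1"), named("h2")})
	fmt.Println(run()) // h1(h2(create))
}
```

The reverse loop is what guarantees registration order: the last-registered hook is applied first and therefore ends up innermost.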
88
z2/backend/ent/blocks_delete.go
Normal file
@ -0,0 +1,88 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"
	"thesis/ent/blocks"
	"thesis/ent/predicate"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
	"entgo.io/ent/schema/field"
)

// BlocksDelete is the builder for deleting a Blocks entity.
type BlocksDelete struct {
	config
	hooks    []Hook
	mutation *BlocksMutation
}

// Where appends a list predicates to the BlocksDelete builder.
func (bd *BlocksDelete) Where(ps ...predicate.Blocks) *BlocksDelete {
	bd.mutation.Where(ps...)
	return bd
}

// Exec executes the deletion query and returns how many vertices were deleted.
func (bd *BlocksDelete) Exec(ctx context.Context) (int, error) {
	return withHooks(ctx, bd.sqlExec, bd.mutation, bd.hooks)
}

// ExecX is like Exec, but panics if an error occurs.
func (bd *BlocksDelete) ExecX(ctx context.Context) int {
	n, err := bd.Exec(ctx)
	if err != nil {
		panic(err)
	}
	return n
}

func (bd *BlocksDelete) sqlExec(ctx context.Context) (int, error) {
	_spec := sqlgraph.NewDeleteSpec(blocks.Table, sqlgraph.NewFieldSpec(blocks.FieldID, field.TypeInt))
	if ps := bd.mutation.predicates; len(ps) > 0 {
		_spec.Predicate = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	affected, err := sqlgraph.DeleteNodes(ctx, bd.driver, _spec)
	if err != nil && sqlgraph.IsConstraintError(err) {
		err = &ConstraintError{msg: err.Error(), wrap: err}
	}
	bd.mutation.done = true
	return affected, err
}

// BlocksDeleteOne is the builder for deleting a single Blocks entity.
type BlocksDeleteOne struct {
	bd *BlocksDelete
}

// Where appends a list predicates to the BlocksDelete builder.
func (bdo *BlocksDeleteOne) Where(ps ...predicate.Blocks) *BlocksDeleteOne {
	bdo.bd.mutation.Where(ps...)
	return bdo
}

// Exec executes the deletion query.
func (bdo *BlocksDeleteOne) Exec(ctx context.Context) error {
	n, err := bdo.bd.Exec(ctx)
	switch {
	case err != nil:
		return err
	case n == 0:
		return &NotFoundError{blocks.Label}
	default:
		return nil
	}
}

// ExecX is like Exec, but panics if an error occurs.
func (bdo *BlocksDeleteOne) ExecX(ctx context.Context) {
	if err := bdo.Exec(ctx); err != nil {
		panic(err)
	}
}
711
z2/backend/ent/blocks_query.go
Normal file
@ -0,0 +1,711 @@
|
|||||||
|
// Code generated by ent, DO NOT EDIT.
|
||||||
|
|
||||||
|
package ent
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"database/sql/driver"
|
||||||
|
"fmt"
|
||||||
|
"math"
|
||||||
|
"thesis/ent/blocks"
|
||||||
|
"thesis/ent/predicate"
|
||||||
|
"thesis/ent/transactions"
|
||||||
|
"thesis/ent/validators"
|
||||||
|
|
||||||
|
"entgo.io/ent/dialect/sql"
|
||||||
|
"entgo.io/ent/dialect/sql/sqlgraph"
|
||||||
|
"entgo.io/ent/schema/field"
|
||||||
|
)
|
||||||
|
|
||||||
|
// BlocksQuery is the builder for querying Blocks entities.
|
||||||
|
type BlocksQuery struct {
|
||||||
|
config
|
||||||
|
ctx *QueryContext
|
||||||
|
order []blocks.OrderOption
|
||||||
|
inters []Interceptor
|
||||||
|
predicates []predicate.Blocks
|
||||||
|
withCaster *ValidatorsQuery
|
||||||
|
withMinedTxs *TransactionsQuery
|
||||||
|
// intermediate query (i.e. traversal path).
|
||||||
|
sql *sql.Selector
|
||||||
|
path func(context.Context) (*sql.Selector, error)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Where adds a new predicate for the BlocksQuery builder.
|
||||||
|
func (bq *BlocksQuery) Where(ps ...predicate.Blocks) *BlocksQuery {
|
||||||
|
bq.predicates = append(bq.predicates, ps...)
|
||||||
|
return bq
|
||||||
|
}
|
||||||
|
|
||||||
|
// Limit the number of records to be returned by this query.
|
||||||
|
func (bq *BlocksQuery) Limit(limit int) *BlocksQuery {
|
||||||
|
bq.ctx.Limit = &limit
|
||||||
|
return bq
|
||||||
|
}
|
||||||
|
|
||||||
|
// Offset to start from.
|
||||||
|
func (bq *BlocksQuery) Offset(offset int) *BlocksQuery {
|
||||||
|
bq.ctx.Offset = &offset
|
||||||
|
return bq
|
||||||
|
}
|
||||||
|
|
||||||
|
// Unique configures the query builder to filter duplicate records on query.
|
||||||
|
// By default, unique is set to true, and can be disabled using this method.
|
||||||
|
func (bq *BlocksQuery) Unique(unique bool) *BlocksQuery {
|
||||||
|
bq.ctx.Unique = &unique
|
||||||
|
return bq
|
||||||
|
}
|
||||||
|
|
||||||
|
// Order specifies how the records should be ordered.
|
||||||
|
func (bq *BlocksQuery) Order(o ...blocks.OrderOption) *BlocksQuery {
|
||||||
|
bq.order = append(bq.order, o...)
|
||||||
|
return bq
|
||||||
|
}
|
||||||
|
|
||||||
|
// QueryCaster chains the current query on the "Caster" edge.
|
||||||
|
func (bq *BlocksQuery) QueryCaster() *ValidatorsQuery {
|
||||||
|
query := (&ValidatorsClient{config: bq.config}).Query()
|
||||||
|
query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
|
||||||
|
if err := bq.prepareQuery(ctx); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
selector := bq.sqlQuery(ctx)
|
||||||
|
if err := selector.Err(); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
step := sqlgraph.NewStep(
|
||||||
|
sqlgraph.From(blocks.Table, blocks.FieldID, selector),
|
||||||
|
sqlgraph.To(validators.Table, validators.FieldID),
|
||||||
|
sqlgraph.Edge(sqlgraph.O2M, false, blocks.CasterTable, blocks.CasterColumn),
|
||||||
|
)
|
||||||
|
fromU = sqlgraph.SetNeighbors(bq.driver.Dialect(), step)
|
||||||
|
return fromU, nil
|
||||||
|
}
|
||||||
|
return query
|
||||||
|
}
|
||||||
|
|
||||||
|
// QueryMinedTxs chains the current query on the "MinedTxs" edge.
|
||||||
|
func (bq *BlocksQuery) QueryMinedTxs() *TransactionsQuery {
|
||||||
|
query := (&TransactionsClient{config: bq.config}).Query()
|
||||||
|
query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
|
||||||
|
if err := bq.prepareQuery(ctx); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
selector := bq.sqlQuery(ctx)
|
||||||
|
if err := selector.Err(); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
step := sqlgraph.NewStep(
|
||||||
|
sqlgraph.From(blocks.Table, blocks.FieldID, selector),
|
||||||
|
sqlgraph.To(transactions.Table, transactions.FieldID),
|
||||||
|
sqlgraph.Edge(sqlgraph.M2M, false, blocks.MinedTxsTable, blocks.MinedTxsPrimaryKey...),
|
||||||
|
)
|
||||||
|
fromU = sqlgraph.SetNeighbors(bq.driver.Dialect(), step)
|
||||||
|
return fromU, nil
|
||||||
|
}
|
||||||
|
return query
|
||||||
|
}
|
||||||
|
|
||||||
|
// First returns the first Blocks entity from the query.
// Returns a *NotFoundError when no Blocks was found.
func (bq *BlocksQuery) First(ctx context.Context) (*Blocks, error) {
	nodes, err := bq.Limit(1).All(setContextOp(ctx, bq.ctx, "First"))
	if err != nil {
		return nil, err
	}
	if len(nodes) == 0 {
		return nil, &NotFoundError{blocks.Label}
	}
	return nodes[0], nil
}

// FirstX is like First, but panics if an error occurs.
func (bq *BlocksQuery) FirstX(ctx context.Context) *Blocks {
	node, err := bq.First(ctx)
	if err != nil && !IsNotFound(err) {
		panic(err)
	}
	return node
}

// FirstID returns the first Blocks ID from the query.
// Returns a *NotFoundError when no Blocks ID was found.
func (bq *BlocksQuery) FirstID(ctx context.Context) (id int, err error) {
	var ids []int
	if ids, err = bq.Limit(1).IDs(setContextOp(ctx, bq.ctx, "FirstID")); err != nil {
		return
	}
	if len(ids) == 0 {
		err = &NotFoundError{blocks.Label}
		return
	}
	return ids[0], nil
}

// FirstIDX is like FirstID, but panics if an error occurs.
func (bq *BlocksQuery) FirstIDX(ctx context.Context) int {
	id, err := bq.FirstID(ctx)
	if err != nil && !IsNotFound(err) {
		panic(err)
	}
	return id
}

// Only returns a single Blocks entity found by the query, ensuring it only returns one.
// Returns a *NotSingularError when more than one Blocks entity is found.
// Returns a *NotFoundError when no Blocks entities are found.
func (bq *BlocksQuery) Only(ctx context.Context) (*Blocks, error) {
	nodes, err := bq.Limit(2).All(setContextOp(ctx, bq.ctx, "Only"))
	if err != nil {
		return nil, err
	}
	switch len(nodes) {
	case 1:
		return nodes[0], nil
	case 0:
		return nil, &NotFoundError{blocks.Label}
	default:
		return nil, &NotSingularError{blocks.Label}
	}
}

// OnlyX is like Only, but panics if an error occurs.
func (bq *BlocksQuery) OnlyX(ctx context.Context) *Blocks {
	node, err := bq.Only(ctx)
	if err != nil {
		panic(err)
	}
	return node
}

// OnlyID is like Only, but returns the only Blocks ID in the query.
// Returns a *NotSingularError when more than one Blocks ID is found.
// Returns a *NotFoundError when no entities are found.
func (bq *BlocksQuery) OnlyID(ctx context.Context) (id int, err error) {
	var ids []int
	if ids, err = bq.Limit(2).IDs(setContextOp(ctx, bq.ctx, "OnlyID")); err != nil {
		return
	}
	switch len(ids) {
	case 1:
		id = ids[0]
	case 0:
		err = &NotFoundError{blocks.Label}
	default:
		err = &NotSingularError{blocks.Label}
	}
	return
}

// OnlyIDX is like OnlyID, but panics if an error occurs.
func (bq *BlocksQuery) OnlyIDX(ctx context.Context) int {
	id, err := bq.OnlyID(ctx)
	if err != nil {
		panic(err)
	}
	return id
}

// All executes the query and returns a list of BlocksSlice.
func (bq *BlocksQuery) All(ctx context.Context) ([]*Blocks, error) {
	ctx = setContextOp(ctx, bq.ctx, "All")
	if err := bq.prepareQuery(ctx); err != nil {
		return nil, err
	}
	qr := querierAll[[]*Blocks, *BlocksQuery]()
	return withInterceptors[[]*Blocks](ctx, bq, qr, bq.inters)
}

// AllX is like All, but panics if an error occurs.
func (bq *BlocksQuery) AllX(ctx context.Context) []*Blocks {
	nodes, err := bq.All(ctx)
	if err != nil {
		panic(err)
	}
	return nodes
}

// IDs executes the query and returns a list of Blocks IDs.
func (bq *BlocksQuery) IDs(ctx context.Context) (ids []int, err error) {
	if bq.ctx.Unique == nil && bq.path != nil {
		bq.Unique(true)
	}
	ctx = setContextOp(ctx, bq.ctx, "IDs")
	if err = bq.Select(blocks.FieldID).Scan(ctx, &ids); err != nil {
		return nil, err
	}
	return ids, nil
}

// IDsX is like IDs, but panics if an error occurs.
func (bq *BlocksQuery) IDsX(ctx context.Context) []int {
	ids, err := bq.IDs(ctx)
	if err != nil {
		panic(err)
	}
	return ids
}

// Count returns the count of the given query.
func (bq *BlocksQuery) Count(ctx context.Context) (int, error) {
	ctx = setContextOp(ctx, bq.ctx, "Count")
	if err := bq.prepareQuery(ctx); err != nil {
		return 0, err
	}
	return withInterceptors[int](ctx, bq, querierCount[*BlocksQuery](), bq.inters)
}

// CountX is like Count, but panics if an error occurs.
func (bq *BlocksQuery) CountX(ctx context.Context) int {
	count, err := bq.Count(ctx)
	if err != nil {
		panic(err)
	}
	return count
}

// Exist returns true if the query has elements in the graph.
func (bq *BlocksQuery) Exist(ctx context.Context) (bool, error) {
	ctx = setContextOp(ctx, bq.ctx, "Exist")
	switch _, err := bq.FirstID(ctx); {
	case IsNotFound(err):
		return false, nil
	case err != nil:
		return false, fmt.Errorf("ent: check existence: %w", err)
	default:
		return true, nil
	}
}

// ExistX is like Exist, but panics if an error occurs.
func (bq *BlocksQuery) ExistX(ctx context.Context) bool {
	exist, err := bq.Exist(ctx)
	if err != nil {
		panic(err)
	}
	return exist
}

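// Illustrative usage (not part of the generated file; the "client" variable and
// the blocks.HashEQ predicate are assumed from the surrounding ent setup).
// First/Only/Exist differ in how they treat the result-set size: First takes
// the first row, Only insists on exactly one, Exist only checks presence.
//
//	b, err := client.Blocks.Query().
//		Where(blocks.HashEQ("deadbeef")).
//		Only(ctx) // *NotFoundError on 0 rows, *NotSingularError on >1
//
//	ok, err := client.Blocks.Query().Exist(ctx) // true if any block exists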
// Clone returns a duplicate of the BlocksQuery builder, including all associated steps. It can be
// used to prepare common query builders and use them differently after the clone is made.
func (bq *BlocksQuery) Clone() *BlocksQuery {
	if bq == nil {
		return nil
	}
	return &BlocksQuery{
		config:       bq.config,
		ctx:          bq.ctx.Clone(),
		order:        append([]blocks.OrderOption{}, bq.order...),
		inters:       append([]Interceptor{}, bq.inters...),
		predicates:   append([]predicate.Blocks{}, bq.predicates...),
		withCaster:   bq.withCaster.Clone(),
		withMinedTxs: bq.withMinedTxs.Clone(),
		// clone intermediate query.
		sql:  bq.sql.Clone(),
		path: bq.path,
	}
}

// WithCaster tells the query-builder to eager-load the nodes that are connected to
// the "Caster" edge. The optional arguments are used to configure the query builder of the edge.
func (bq *BlocksQuery) WithCaster(opts ...func(*ValidatorsQuery)) *BlocksQuery {
	query := (&ValidatorsClient{config: bq.config}).Query()
	for _, opt := range opts {
		opt(query)
	}
	bq.withCaster = query
	return bq
}

// WithMinedTxs tells the query-builder to eager-load the nodes that are connected to
// the "MinedTxs" edge. The optional arguments are used to configure the query builder of the edge.
func (bq *BlocksQuery) WithMinedTxs(opts ...func(*TransactionsQuery)) *BlocksQuery {
	query := (&TransactionsClient{config: bq.config}).Query()
	for _, opt := range opts {
		opt(query)
	}
	bq.withMinedTxs = query
	return bq
}

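// Illustrative usage (not part of the generated file; "client" is assumed):
// eager-load both edges so Edges.Caster and Edges.MinedTxs are populated in
// the same query pass instead of requiring a follow-up query per block.
//
//	blks, err := client.Blocks.Query().
//		WithCaster().
//		WithMinedTxs().
//		All(ctx)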
// GroupBy is used to group vertices by one or more fields/columns.
// It is often used with aggregate functions, like: count, max, mean, min, sum.
//
// Example:
//
//	var v []struct {
//		Hash  string `json:"hash,omitempty"`
//		Count int    `json:"count,omitempty"`
//	}
//
//	client.Blocks.Query().
//		GroupBy(blocks.FieldHash).
//		Aggregate(ent.Count()).
//		Scan(ctx, &v)
func (bq *BlocksQuery) GroupBy(field string, fields ...string) *BlocksGroupBy {
	bq.ctx.Fields = append([]string{field}, fields...)
	grbuild := &BlocksGroupBy{build: bq}
	grbuild.flds = &bq.ctx.Fields
	grbuild.label = blocks.Label
	grbuild.scan = grbuild.Scan
	return grbuild
}

// Select allows the selection one or more fields/columns for the given query,
// instead of selecting all fields in the entity.
//
// Example:
//
//	var v []struct {
//		Hash string `json:"hash,omitempty"`
//	}
//
//	client.Blocks.Query().
//		Select(blocks.FieldHash).
//		Scan(ctx, &v)
func (bq *BlocksQuery) Select(fields ...string) *BlocksSelect {
	bq.ctx.Fields = append(bq.ctx.Fields, fields...)
	sbuild := &BlocksSelect{BlocksQuery: bq}
	sbuild.label = blocks.Label
	sbuild.flds, sbuild.scan = &bq.ctx.Fields, sbuild.Scan
	return sbuild
}

// Aggregate returns a BlocksSelect configured with the given aggregations.
func (bq *BlocksQuery) Aggregate(fns ...AggregateFunc) *BlocksSelect {
	return bq.Select().Aggregate(fns...)
}

func (bq *BlocksQuery) prepareQuery(ctx context.Context) error {
	for _, inter := range bq.inters {
		if inter == nil {
			return fmt.Errorf("ent: uninitialized interceptor (forgotten import ent/runtime?)")
		}
		if trv, ok := inter.(Traverser); ok {
			if err := trv.Traverse(ctx, bq); err != nil {
				return err
			}
		}
	}
	for _, f := range bq.ctx.Fields {
		if !blocks.ValidColumn(f) {
			return &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
		}
	}
	if bq.path != nil {
		prev, err := bq.path(ctx)
		if err != nil {
			return err
		}
		bq.sql = prev
	}
	return nil
}

func (bq *BlocksQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*Blocks, error) {
	var (
		nodes       = []*Blocks{}
		_spec       = bq.querySpec()
		loadedTypes = [2]bool{
			bq.withCaster != nil,
			bq.withMinedTxs != nil,
		}
	)
	_spec.ScanValues = func(columns []string) ([]any, error) {
		return (*Blocks).scanValues(nil, columns)
	}
	_spec.Assign = func(columns []string, values []any) error {
		node := &Blocks{config: bq.config}
		nodes = append(nodes, node)
		node.Edges.loadedTypes = loadedTypes
		return node.assignValues(columns, values)
	}
	for i := range hooks {
		hooks[i](ctx, _spec)
	}
	if err := sqlgraph.QueryNodes(ctx, bq.driver, _spec); err != nil {
		return nil, err
	}
	if len(nodes) == 0 {
		return nodes, nil
	}
	if query := bq.withCaster; query != nil {
		if err := bq.loadCaster(ctx, query, nodes,
			func(n *Blocks) { n.Edges.Caster = []*Validators{} },
			func(n *Blocks, e *Validators) { n.Edges.Caster = append(n.Edges.Caster, e) }); err != nil {
			return nil, err
		}
	}
	if query := bq.withMinedTxs; query != nil {
		if err := bq.loadMinedTxs(ctx, query, nodes,
			func(n *Blocks) { n.Edges.MinedTxs = []*Transactions{} },
			func(n *Blocks, e *Transactions) { n.Edges.MinedTxs = append(n.Edges.MinedTxs, e) }); err != nil {
			return nil, err
		}
	}
	return nodes, nil
}

func (bq *BlocksQuery) loadCaster(ctx context.Context, query *ValidatorsQuery, nodes []*Blocks, init func(*Blocks), assign func(*Blocks, *Validators)) error {
	fks := make([]driver.Value, 0, len(nodes))
	nodeids := make(map[int]*Blocks)
	for i := range nodes {
		fks = append(fks, nodes[i].ID)
		nodeids[nodes[i].ID] = nodes[i]
		if init != nil {
			init(nodes[i])
		}
	}
	query.withFKs = true
	query.Where(predicate.Validators(func(s *sql.Selector) {
		s.Where(sql.InValues(s.C(blocks.CasterColumn), fks...))
	}))
	neighbors, err := query.All(ctx)
	if err != nil {
		return err
	}
	for _, n := range neighbors {
		fk := n.blocks_caster
		if fk == nil {
			return fmt.Errorf(`foreign-key "blocks_caster" is nil for node %v`, n.ID)
		}
		node, ok := nodeids[*fk]
		if !ok {
			return fmt.Errorf(`unexpected referenced foreign-key "blocks_caster" returned %v for node %v`, *fk, n.ID)
		}
		assign(node, n)
	}
	return nil
}

func (bq *BlocksQuery) loadMinedTxs(ctx context.Context, query *TransactionsQuery, nodes []*Blocks, init func(*Blocks), assign func(*Blocks, *Transactions)) error {
	edgeIDs := make([]driver.Value, len(nodes))
	byID := make(map[int]*Blocks)
	nids := make(map[int]map[*Blocks]struct{})
	for i, node := range nodes {
		edgeIDs[i] = node.ID
		byID[node.ID] = node
		if init != nil {
			init(node)
		}
	}
	query.Where(func(s *sql.Selector) {
		joinT := sql.Table(blocks.MinedTxsTable)
		s.Join(joinT).On(s.C(transactions.FieldID), joinT.C(blocks.MinedTxsPrimaryKey[1]))
		s.Where(sql.InValues(joinT.C(blocks.MinedTxsPrimaryKey[0]), edgeIDs...))
		columns := s.SelectedColumns()
		s.Select(joinT.C(blocks.MinedTxsPrimaryKey[0]))
		s.AppendSelect(columns...)
		s.SetDistinct(false)
	})
	if err := query.prepareQuery(ctx); err != nil {
		return err
	}
	qr := QuerierFunc(func(ctx context.Context, q Query) (Value, error) {
		return query.sqlAll(ctx, func(_ context.Context, spec *sqlgraph.QuerySpec) {
			assign := spec.Assign
			values := spec.ScanValues
			spec.ScanValues = func(columns []string) ([]any, error) {
				values, err := values(columns[1:])
				if err != nil {
					return nil, err
				}
				return append([]any{new(sql.NullInt64)}, values...), nil
			}
			spec.Assign = func(columns []string, values []any) error {
				outValue := int(values[0].(*sql.NullInt64).Int64)
				inValue := int(values[1].(*sql.NullInt64).Int64)
				if nids[inValue] == nil {
					nids[inValue] = map[*Blocks]struct{}{byID[outValue]: {}}
					return assign(columns[1:], values[1:])
				}
				nids[inValue][byID[outValue]] = struct{}{}
				return nil
			}
		})
	})
	neighbors, err := withInterceptors[[]*Transactions](ctx, query, qr, query.inters)
	if err != nil {
		return err
	}
	for _, n := range neighbors {
		nodes, ok := nids[n.ID]
		if !ok {
			return fmt.Errorf(`unexpected "MinedTxs" node returned %v`, n.ID)
		}
		for kn := range nodes {
			assign(kn, n)
		}
	}
	return nil
}

func (bq *BlocksQuery) sqlCount(ctx context.Context) (int, error) {
	_spec := bq.querySpec()
	_spec.Node.Columns = bq.ctx.Fields
	if len(bq.ctx.Fields) > 0 {
		_spec.Unique = bq.ctx.Unique != nil && *bq.ctx.Unique
	}
	return sqlgraph.CountNodes(ctx, bq.driver, _spec)
}

func (bq *BlocksQuery) querySpec() *sqlgraph.QuerySpec {
	_spec := sqlgraph.NewQuerySpec(blocks.Table, blocks.Columns, sqlgraph.NewFieldSpec(blocks.FieldID, field.TypeInt))
	_spec.From = bq.sql
	if unique := bq.ctx.Unique; unique != nil {
		_spec.Unique = *unique
	} else if bq.path != nil {
		_spec.Unique = true
	}
	if fields := bq.ctx.Fields; len(fields) > 0 {
		_spec.Node.Columns = make([]string, 0, len(fields))
		_spec.Node.Columns = append(_spec.Node.Columns, blocks.FieldID)
		for i := range fields {
			if fields[i] != blocks.FieldID {
				_spec.Node.Columns = append(_spec.Node.Columns, fields[i])
			}
		}
	}
	if ps := bq.predicates; len(ps) > 0 {
		_spec.Predicate = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	if limit := bq.ctx.Limit; limit != nil {
		_spec.Limit = *limit
	}
	if offset := bq.ctx.Offset; offset != nil {
		_spec.Offset = *offset
	}
	if ps := bq.order; len(ps) > 0 {
		_spec.Order = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	return _spec
}

func (bq *BlocksQuery) sqlQuery(ctx context.Context) *sql.Selector {
	builder := sql.Dialect(bq.driver.Dialect())
	t1 := builder.Table(blocks.Table)
	columns := bq.ctx.Fields
	if len(columns) == 0 {
		columns = blocks.Columns
	}
	selector := builder.Select(t1.Columns(columns...)...).From(t1)
	if bq.sql != nil {
		selector = bq.sql
		selector.Select(selector.Columns(columns...)...)
	}
	if bq.ctx.Unique != nil && *bq.ctx.Unique {
		selector.Distinct()
	}
	for _, p := range bq.predicates {
		p(selector)
	}
	for _, p := range bq.order {
		p(selector)
	}
	if offset := bq.ctx.Offset; offset != nil {
		// limit is mandatory for offset clause. We start
		// with default value, and override it below if needed.
		selector.Offset(*offset).Limit(math.MaxInt32)
	}
	if limit := bq.ctx.Limit; limit != nil {
		selector.Limit(*limit)
	}
	return selector
}

// BlocksGroupBy is the group-by builder for Blocks entities.
type BlocksGroupBy struct {
	selector
	build *BlocksQuery
}

// Aggregate adds the given aggregation functions to the group-by query.
func (bgb *BlocksGroupBy) Aggregate(fns ...AggregateFunc) *BlocksGroupBy {
	bgb.fns = append(bgb.fns, fns...)
	return bgb
}

// Scan applies the selector query and scans the result into the given value.
func (bgb *BlocksGroupBy) Scan(ctx context.Context, v any) error {
	ctx = setContextOp(ctx, bgb.build.ctx, "GroupBy")
	if err := bgb.build.prepareQuery(ctx); err != nil {
		return err
	}
	return scanWithInterceptors[*BlocksQuery, *BlocksGroupBy](ctx, bgb.build, bgb, bgb.build.inters, v)
}

func (bgb *BlocksGroupBy) sqlScan(ctx context.Context, root *BlocksQuery, v any) error {
	selector := root.sqlQuery(ctx).Select()
	aggregation := make([]string, 0, len(bgb.fns))
	for _, fn := range bgb.fns {
		aggregation = append(aggregation, fn(selector))
	}
	if len(selector.SelectedColumns()) == 0 {
		columns := make([]string, 0, len(*bgb.flds)+len(bgb.fns))
		for _, f := range *bgb.flds {
			columns = append(columns, selector.C(f))
		}
		columns = append(columns, aggregation...)
		selector.Select(columns...)
	}
	selector.GroupBy(selector.Columns(*bgb.flds...)...)
	if err := selector.Err(); err != nil {
		return err
	}
	rows := &sql.Rows{}
	query, args := selector.Query()
	if err := bgb.build.driver.Query(ctx, query, args, rows); err != nil {
		return err
	}
	defer rows.Close()
	return sql.ScanSlice(rows, v)
}

// BlocksSelect is the builder for selecting fields of Blocks entities.
type BlocksSelect struct {
	*BlocksQuery
	selector
}

// Aggregate adds the given aggregation functions to the selector query.
func (bs *BlocksSelect) Aggregate(fns ...AggregateFunc) *BlocksSelect {
	bs.fns = append(bs.fns, fns...)
	return bs
}

// Scan applies the selector query and scans the result into the given value.
func (bs *BlocksSelect) Scan(ctx context.Context, v any) error {
	ctx = setContextOp(ctx, bs.ctx, "Select")
	if err := bs.prepareQuery(ctx); err != nil {
		return err
	}
	return scanWithInterceptors[*BlocksQuery, *BlocksSelect](ctx, bs.BlocksQuery, bs, bs.inters, v)
}

func (bs *BlocksSelect) sqlScan(ctx context.Context, root *BlocksQuery, v any) error {
	selector := root.sqlQuery(ctx)
	aggregation := make([]string, 0, len(bs.fns))
	for _, fn := range bs.fns {
		aggregation = append(aggregation, fn(selector))
	}
	switch n := len(*bs.selector.flds); {
	case n == 0 && len(aggregation) > 0:
		selector.Select(aggregation...)
	case n != 0 && len(aggregation) > 0:
		selector.AppendSelect(aggregation...)
	}
	rows := &sql.Rows{}
	query, args := selector.Query()
	if err := bs.driver.Query(ctx, query, args, rows); err != nil {
		return err
	}
	defer rows.Close()
	return sql.ScanSlice(rows, v)
}
623
z2/backend/ent/blocks_update.go
Normal file
@ -0,0 +1,623 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"
	"errors"
	"fmt"
	"thesis/ent/blocks"
	"thesis/ent/predicate"
	"thesis/ent/transactions"
	"thesis/ent/validators"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
	"entgo.io/ent/schema/field"
)

// BlocksUpdate is the builder for updating Blocks entities.
type BlocksUpdate struct {
	config
	hooks    []Hook
	mutation *BlocksMutation
}

// Where appends a list of predicates to the BlocksUpdate builder.
func (bu *BlocksUpdate) Where(ps ...predicate.Blocks) *BlocksUpdate {
	bu.mutation.Where(ps...)
	return bu
}

// SetHash sets the "hash" field.
func (bu *BlocksUpdate) SetHash(s string) *BlocksUpdate {
	bu.mutation.SetHash(s)
	return bu
}

// SetNillableHash sets the "hash" field if the given value is not nil.
func (bu *BlocksUpdate) SetNillableHash(s *string) *BlocksUpdate {
	if s != nil {
		bu.SetHash(*s)
	}
	return bu
}

// SetLength sets the "length" field.
func (bu *BlocksUpdate) SetLength(i int) *BlocksUpdate {
	bu.mutation.ResetLength()
	bu.mutation.SetLength(i)
	return bu
}

// SetNillableLength sets the "length" field if the given value is not nil.
func (bu *BlocksUpdate) SetNillableLength(i *int) *BlocksUpdate {
	if i != nil {
		bu.SetLength(*i)
	}
	return bu
}

// AddLength adds i to the "length" field.
func (bu *BlocksUpdate) AddLength(i int) *BlocksUpdate {
	bu.mutation.AddLength(i)
	return bu
}

// SetPreviousHash sets the "previousHash" field.
func (bu *BlocksUpdate) SetPreviousHash(s string) *BlocksUpdate {
	bu.mutation.SetPreviousHash(s)
	return bu
}

// SetNillablePreviousHash sets the "previousHash" field if the given value is not nil.
func (bu *BlocksUpdate) SetNillablePreviousHash(s *string) *BlocksUpdate {
	if s != nil {
		bu.SetPreviousHash(*s)
	}
	return bu
}

// AddCasterIDs adds the "Caster" edge to the Validators entity by IDs.
func (bu *BlocksUpdate) AddCasterIDs(ids ...int) *BlocksUpdate {
	bu.mutation.AddCasterIDs(ids...)
	return bu
}

// AddCaster adds the "Caster" edges to the Validators entity.
func (bu *BlocksUpdate) AddCaster(v ...*Validators) *BlocksUpdate {
	ids := make([]int, len(v))
	for i := range v {
		ids[i] = v[i].ID
	}
	return bu.AddCasterIDs(ids...)
}

// AddMinedTxIDs adds the "MinedTxs" edge to the Transactions entity by IDs.
func (bu *BlocksUpdate) AddMinedTxIDs(ids ...int) *BlocksUpdate {
	bu.mutation.AddMinedTxIDs(ids...)
	return bu
}

// AddMinedTxs adds the "MinedTxs" edges to the Transactions entity.
func (bu *BlocksUpdate) AddMinedTxs(t ...*Transactions) *BlocksUpdate {
	ids := make([]int, len(t))
	for i := range t {
		ids[i] = t[i].ID
	}
	return bu.AddMinedTxIDs(ids...)
}

// Mutation returns the BlocksMutation object of the builder.
func (bu *BlocksUpdate) Mutation() *BlocksMutation {
	return bu.mutation
}

// ClearCaster clears all "Caster" edges to the Validators entity.
func (bu *BlocksUpdate) ClearCaster() *BlocksUpdate {
	bu.mutation.ClearCaster()
	return bu
}

// RemoveCasterIDs removes the "Caster" edge to Validators entities by IDs.
func (bu *BlocksUpdate) RemoveCasterIDs(ids ...int) *BlocksUpdate {
	bu.mutation.RemoveCasterIDs(ids...)
	return bu
}

// RemoveCaster removes "Caster" edges to Validators entities.
func (bu *BlocksUpdate) RemoveCaster(v ...*Validators) *BlocksUpdate {
	ids := make([]int, len(v))
	for i := range v {
		ids[i] = v[i].ID
	}
	return bu.RemoveCasterIDs(ids...)
}

// ClearMinedTxs clears all "MinedTxs" edges to the Transactions entity.
func (bu *BlocksUpdate) ClearMinedTxs() *BlocksUpdate {
	bu.mutation.ClearMinedTxs()
	return bu
}

// RemoveMinedTxIDs removes the "MinedTxs" edge to Transactions entities by IDs.
func (bu *BlocksUpdate) RemoveMinedTxIDs(ids ...int) *BlocksUpdate {
	bu.mutation.RemoveMinedTxIDs(ids...)
	return bu
}

// RemoveMinedTxs removes "MinedTxs" edges to Transactions entities.
func (bu *BlocksUpdate) RemoveMinedTxs(t ...*Transactions) *BlocksUpdate {
	ids := make([]int, len(t))
	for i := range t {
		ids[i] = t[i].ID
	}
	return bu.RemoveMinedTxIDs(ids...)
}

// Save executes the query and returns the number of nodes affected by the update operation.
func (bu *BlocksUpdate) Save(ctx context.Context) (int, error) {
	return withHooks(ctx, bu.sqlSave, bu.mutation, bu.hooks)
}

// SaveX is like Save, but panics if an error occurs.
func (bu *BlocksUpdate) SaveX(ctx context.Context) int {
	affected, err := bu.Save(ctx)
	if err != nil {
		panic(err)
	}
	return affected
}

// Exec executes the query.
func (bu *BlocksUpdate) Exec(ctx context.Context) error {
	_, err := bu.Save(ctx)
	return err
}

// ExecX is like Exec, but panics if an error occurs.
func (bu *BlocksUpdate) ExecX(ctx context.Context) {
	if err := bu.Exec(ctx); err != nil {
		panic(err)
	}
}

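// Illustrative usage (not part of the generated file; "client" and the
// blocks.LengthGT predicate are assumed from the surrounding ent setup).
// Update builders apply one statement to every matching row; Save reports
// how many rows were affected, while Exec discards the count.
//
//	n, err := client.Blocks.Update().
//		Where(blocks.LengthGT(0)).
//		AddLength(1). // increment rather than overwrite the "length" field
//		Save(ctx)     // n = number of updated Blocks rows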
func (bu *BlocksUpdate) sqlSave(ctx context.Context) (n int, err error) {
	_spec := sqlgraph.NewUpdateSpec(blocks.Table, blocks.Columns, sqlgraph.NewFieldSpec(blocks.FieldID, field.TypeInt))
	if ps := bu.mutation.predicates; len(ps) > 0 {
		_spec.Predicate = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	if value, ok := bu.mutation.Hash(); ok {
		_spec.SetField(blocks.FieldHash, field.TypeString, value)
	}
	if value, ok := bu.mutation.Length(); ok {
		_spec.SetField(blocks.FieldLength, field.TypeInt, value)
	}
	if value, ok := bu.mutation.AddedLength(); ok {
		_spec.AddField(blocks.FieldLength, field.TypeInt, value)
	}
	if value, ok := bu.mutation.PreviousHash(); ok {
		_spec.SetField(blocks.FieldPreviousHash, field.TypeString, value)
	}
	if bu.mutation.CasterCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.O2M,
			Inverse: false,
			Table:   blocks.CasterTable,
			Columns: []string{blocks.CasterColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(validators.FieldID, field.TypeInt),
			},
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := bu.mutation.RemovedCasterIDs(); len(nodes) > 0 && !bu.mutation.CasterCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.O2M,
			Inverse: false,
			Table:   blocks.CasterTable,
			Columns: []string{blocks.CasterColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(validators.FieldID, field.TypeInt),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := bu.mutation.CasterIDs(); len(nodes) > 0 {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.O2M,
			Inverse: false,
			Table:   blocks.CasterTable,
			Columns: []string{blocks.CasterColumn},
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(validators.FieldID, field.TypeInt),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges.Add = append(_spec.Edges.Add, edge)
	}
	if bu.mutation.MinedTxsCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2M,
			Inverse: false,
			Table:   blocks.MinedTxsTable,
			Columns: blocks.MinedTxsPrimaryKey,
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(transactions.FieldID, field.TypeInt),
			},
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := bu.mutation.RemovedMinedTxsIDs(); len(nodes) > 0 && !bu.mutation.MinedTxsCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2M,
			Inverse: false,
			Table:   blocks.MinedTxsTable,
			Columns: blocks.MinedTxsPrimaryKey,
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(transactions.FieldID, field.TypeInt),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := bu.mutation.MinedTxsIDs(); len(nodes) > 0 {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2M,
			Inverse: false,
			Table:   blocks.MinedTxsTable,
			Columns: blocks.MinedTxsPrimaryKey,
			Bidi:    false,
|
||||||
|
Target: &sqlgraph.EdgeTarget{
|
||||||
|
IDSpec: sqlgraph.NewFieldSpec(transactions.FieldID, field.TypeInt),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for _, k := range nodes {
|
||||||
|
edge.Target.Nodes = append(edge.Target.Nodes, k)
|
||||||
|
}
|
||||||
|
_spec.Edges.Add = append(_spec.Edges.Add, edge)
|
||||||
|
}
|
||||||
|
if n, err = sqlgraph.UpdateNodes(ctx, bu.driver, _spec); err != nil {
|
||||||
|
if _, ok := err.(*sqlgraph.NotFoundError); ok {
|
||||||
|
err = &NotFoundError{blocks.Label}
|
||||||
|
} else if sqlgraph.IsConstraintError(err) {
|
||||||
|
err = &ConstraintError{msg: err.Error(), wrap: err}
|
||||||
|
}
|
||||||
|
return 0, err
|
||||||
|
}
|
||||||
|
bu.mutation.done = true
|
||||||
|
return n, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// BlocksUpdateOne is the builder for updating a single Blocks entity.
|
||||||
|
type BlocksUpdateOne struct {
|
||||||
|
config
|
||||||
|
fields []string
|
||||||
|
hooks []Hook
|
||||||
|
mutation *BlocksMutation
|
||||||
|
}
|
||||||
|
|
||||||
|
// SetHash sets the "hash" field.
|
||||||
|
func (buo *BlocksUpdateOne) SetHash(s string) *BlocksUpdateOne {
|
||||||
|
buo.mutation.SetHash(s)
|
||||||
|
return buo
|
||||||
|
}
|
||||||
|
|
||||||
|
// SetNillableHash sets the "hash" field if the given value is not nil.
|
||||||
|
func (buo *BlocksUpdateOne) SetNillableHash(s *string) *BlocksUpdateOne {
|
||||||
|
if s != nil {
|
||||||
|
buo.SetHash(*s)
|
||||||
|
}
|
||||||
|
return buo
|
||||||
|
}
|
||||||
|
|
||||||
|
// SetLength sets the "length" field.
|
||||||
|
func (buo *BlocksUpdateOne) SetLength(i int) *BlocksUpdateOne {
|
||||||
|
buo.mutation.ResetLength()
|
||||||
|
buo.mutation.SetLength(i)
|
||||||
|
return buo
|
||||||
|
}
|
||||||
|
|
||||||
|
// SetNillableLength sets the "length" field if the given value is not nil.
|
||||||
|
func (buo *BlocksUpdateOne) SetNillableLength(i *int) *BlocksUpdateOne {
|
||||||
|
if i != nil {
|
||||||
|
buo.SetLength(*i)
|
||||||
|
}
|
||||||
|
return buo
|
||||||
|
}
|
||||||
|
|
||||||
|
// AddLength adds i to the "length" field.
|
||||||
|
func (buo *BlocksUpdateOne) AddLength(i int) *BlocksUpdateOne {
|
||||||
|
buo.mutation.AddLength(i)
|
||||||
|
return buo
|
||||||
|
}
|
||||||
|
|
||||||
|
// SetPreviousHash sets the "previousHash" field.
|
||||||
|
func (buo *BlocksUpdateOne) SetPreviousHash(s string) *BlocksUpdateOne {
|
||||||
|
buo.mutation.SetPreviousHash(s)
|
||||||
|
return buo
|
||||||
|
}
|
||||||
|
|
||||||
|
// SetNillablePreviousHash sets the "previousHash" field if the given value is not nil.
|
||||||
|
func (buo *BlocksUpdateOne) SetNillablePreviousHash(s *string) *BlocksUpdateOne {
|
||||||
|
if s != nil {
|
||||||
|
buo.SetPreviousHash(*s)
|
||||||
|
}
|
||||||
|
return buo
|
||||||
|
}
|
||||||
|
|
||||||
|
// AddCasterIDs adds the "Caster" edge to the Validators entity by IDs.
|
||||||
|
func (buo *BlocksUpdateOne) AddCasterIDs(ids ...int) *BlocksUpdateOne {
|
||||||
|
buo.mutation.AddCasterIDs(ids...)
|
||||||
|
return buo
|
||||||
|
}
|
||||||
|
|
||||||
|
// AddCaster adds the "Caster" edges to the Validators entity.
|
||||||
|
func (buo *BlocksUpdateOne) AddCaster(v ...*Validators) *BlocksUpdateOne {
|
||||||
|
ids := make([]int, len(v))
|
||||||
|
for i := range v {
|
||||||
|
ids[i] = v[i].ID
|
||||||
|
}
|
||||||
|
return buo.AddCasterIDs(ids...)
|
||||||
|
}
|
||||||
|
|
||||||
|
// AddMinedTxIDs adds the "MinedTxs" edge to the Transactions entity by IDs.
|
||||||
|
func (buo *BlocksUpdateOne) AddMinedTxIDs(ids ...int) *BlocksUpdateOne {
|
||||||
|
buo.mutation.AddMinedTxIDs(ids...)
|
||||||
|
return buo
|
||||||
|
}
|
||||||
|
|
||||||
|
// AddMinedTxs adds the "MinedTxs" edges to the Transactions entity.
|
||||||
|
func (buo *BlocksUpdateOne) AddMinedTxs(t ...*Transactions) *BlocksUpdateOne {
|
||||||
|
ids := make([]int, len(t))
|
||||||
|
for i := range t {
|
||||||
|
ids[i] = t[i].ID
|
||||||
|
}
|
||||||
|
return buo.AddMinedTxIDs(ids...)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Mutation returns the BlocksMutation object of the builder.
|
||||||
|
func (buo *BlocksUpdateOne) Mutation() *BlocksMutation {
|
||||||
|
return buo.mutation
|
||||||
|
}
|
||||||
|
|
||||||
|
// ClearCaster clears all "Caster" edges to the Validators entity.
|
||||||
|
func (buo *BlocksUpdateOne) ClearCaster() *BlocksUpdateOne {
|
||||||
|
buo.mutation.ClearCaster()
|
||||||
|
return buo
|
||||||
|
}
|
||||||
|
|
||||||
|
// RemoveCasterIDs removes the "Caster" edge to Validators entities by IDs.
|
||||||
|
func (buo *BlocksUpdateOne) RemoveCasterIDs(ids ...int) *BlocksUpdateOne {
|
||||||
|
buo.mutation.RemoveCasterIDs(ids...)
|
||||||
|
return buo
|
||||||
|
}
|
||||||
|
|
||||||
|
// RemoveCaster removes "Caster" edges to Validators entities.
|
||||||
|
func (buo *BlocksUpdateOne) RemoveCaster(v ...*Validators) *BlocksUpdateOne {
|
||||||
|
ids := make([]int, len(v))
|
||||||
|
for i := range v {
|
||||||
|
ids[i] = v[i].ID
|
||||||
|
}
|
||||||
|
return buo.RemoveCasterIDs(ids...)
|
||||||
|
}
|
||||||
|
|
||||||
|
// ClearMinedTxs clears all "MinedTxs" edges to the Transactions entity.
|
||||||
|
func (buo *BlocksUpdateOne) ClearMinedTxs() *BlocksUpdateOne {
|
||||||
|
buo.mutation.ClearMinedTxs()
|
||||||
|
return buo
|
||||||
|
}
|
||||||
|
|
||||||
|
// RemoveMinedTxIDs removes the "MinedTxs" edge to Transactions entities by IDs.
|
||||||
|
func (buo *BlocksUpdateOne) RemoveMinedTxIDs(ids ...int) *BlocksUpdateOne {
|
||||||
|
buo.mutation.RemoveMinedTxIDs(ids...)
|
||||||
|
return buo
|
||||||
|
}
|
||||||
|
|
||||||
|
// RemoveMinedTxs removes "MinedTxs" edges to Transactions entities.
|
||||||
|
func (buo *BlocksUpdateOne) RemoveMinedTxs(t ...*Transactions) *BlocksUpdateOne {
|
||||||
|
ids := make([]int, len(t))
|
||||||
|
for i := range t {
|
||||||
|
ids[i] = t[i].ID
|
||||||
|
}
|
||||||
|
return buo.RemoveMinedTxIDs(ids...)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Where appends a list predicates to the BlocksUpdate builder.
|
||||||
|
func (buo *BlocksUpdateOne) Where(ps ...predicate.Blocks) *BlocksUpdateOne {
|
||||||
|
buo.mutation.Where(ps...)
|
||||||
|
return buo
|
||||||
|
}
|
||||||
|
|
||||||
|
// Select allows selecting one or more fields (columns) of the returned entity.
|
||||||
|
// The default is selecting all fields defined in the entity schema.
|
||||||
|
func (buo *BlocksUpdateOne) Select(field string, fields ...string) *BlocksUpdateOne {
|
||||||
|
buo.fields = append([]string{field}, fields...)
|
||||||
|
return buo
|
||||||
|
}
|
||||||
|
|
||||||
|
// Save executes the query and returns the updated Blocks entity.
|
||||||
|
func (buo *BlocksUpdateOne) Save(ctx context.Context) (*Blocks, error) {
|
||||||
|
return withHooks(ctx, buo.sqlSave, buo.mutation, buo.hooks)
|
||||||
|
}
|
||||||
|
|
||||||
|
// SaveX is like Save, but panics if an error occurs.
|
||||||
|
func (buo *BlocksUpdateOne) SaveX(ctx context.Context) *Blocks {
|
||||||
|
node, err := buo.Save(ctx)
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
return node
|
||||||
|
}
|
||||||
|
|
||||||
|
// Exec executes the query on the entity.
|
||||||
|
func (buo *BlocksUpdateOne) Exec(ctx context.Context) error {
|
||||||
|
_, err := buo.Save(ctx)
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
// ExecX is like Exec, but panics if an error occurs.
|
||||||
|
func (buo *BlocksUpdateOne) ExecX(ctx context.Context) {
|
||||||
|
if err := buo.Exec(ctx); err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func (buo *BlocksUpdateOne) sqlSave(ctx context.Context) (_node *Blocks, err error) {
|
||||||
|
_spec := sqlgraph.NewUpdateSpec(blocks.Table, blocks.Columns, sqlgraph.NewFieldSpec(blocks.FieldID, field.TypeInt))
|
||||||
|
id, ok := buo.mutation.ID()
|
||||||
|
if !ok {
|
||||||
|
return nil, &ValidationError{Name: "id", err: errors.New(`ent: missing "Blocks.id" for update`)}
|
||||||
|
}
|
||||||
|
_spec.Node.ID.Value = id
|
||||||
|
if fields := buo.fields; len(fields) > 0 {
|
||||||
|
_spec.Node.Columns = make([]string, 0, len(fields))
|
||||||
|
_spec.Node.Columns = append(_spec.Node.Columns, blocks.FieldID)
|
||||||
|
for _, f := range fields {
|
||||||
|
if !blocks.ValidColumn(f) {
|
||||||
|
return nil, &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
|
||||||
|
}
|
||||||
|
if f != blocks.FieldID {
|
||||||
|
_spec.Node.Columns = append(_spec.Node.Columns, f)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if ps := buo.mutation.predicates; len(ps) > 0 {
|
||||||
|
_spec.Predicate = func(selector *sql.Selector) {
|
||||||
|
for i := range ps {
|
||||||
|
ps[i](selector)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if value, ok := buo.mutation.Hash(); ok {
|
||||||
|
_spec.SetField(blocks.FieldHash, field.TypeString, value)
|
||||||
|
}
|
||||||
|
if value, ok := buo.mutation.Length(); ok {
|
||||||
|
_spec.SetField(blocks.FieldLength, field.TypeInt, value)
|
||||||
|
}
|
||||||
|
if value, ok := buo.mutation.AddedLength(); ok {
|
||||||
|
_spec.AddField(blocks.FieldLength, field.TypeInt, value)
|
||||||
|
}
|
||||||
|
if value, ok := buo.mutation.PreviousHash(); ok {
|
||||||
|
_spec.SetField(blocks.FieldPreviousHash, field.TypeString, value)
|
||||||
|
}
|
||||||
|
if buo.mutation.CasterCleared() {
|
||||||
|
edge := &sqlgraph.EdgeSpec{
|
||||||
|
Rel: sqlgraph.O2M,
|
||||||
|
Inverse: false,
|
||||||
|
Table: blocks.CasterTable,
|
||||||
|
Columns: []string{blocks.CasterColumn},
|
||||||
|
Bidi: false,
|
||||||
|
Target: &sqlgraph.EdgeTarget{
|
||||||
|
IDSpec: sqlgraph.NewFieldSpec(validators.FieldID, field.TypeInt),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
|
||||||
|
}
|
||||||
|
if nodes := buo.mutation.RemovedCasterIDs(); len(nodes) > 0 && !buo.mutation.CasterCleared() {
|
||||||
|
edge := &sqlgraph.EdgeSpec{
|
||||||
|
Rel: sqlgraph.O2M,
|
||||||
|
Inverse: false,
|
||||||
|
Table: blocks.CasterTable,
|
||||||
|
Columns: []string{blocks.CasterColumn},
|
||||||
|
Bidi: false,
|
||||||
|
Target: &sqlgraph.EdgeTarget{
|
||||||
|
IDSpec: sqlgraph.NewFieldSpec(validators.FieldID, field.TypeInt),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for _, k := range nodes {
|
||||||
|
edge.Target.Nodes = append(edge.Target.Nodes, k)
|
||||||
|
}
|
||||||
|
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
|
||||||
|
}
|
||||||
|
if nodes := buo.mutation.CasterIDs(); len(nodes) > 0 {
|
||||||
|
edge := &sqlgraph.EdgeSpec{
|
||||||
|
Rel: sqlgraph.O2M,
|
||||||
|
Inverse: false,
|
||||||
|
Table: blocks.CasterTable,
|
||||||
|
Columns: []string{blocks.CasterColumn},
|
||||||
|
Bidi: false,
|
||||||
|
Target: &sqlgraph.EdgeTarget{
|
||||||
|
IDSpec: sqlgraph.NewFieldSpec(validators.FieldID, field.TypeInt),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for _, k := range nodes {
|
||||||
|
edge.Target.Nodes = append(edge.Target.Nodes, k)
|
||||||
|
}
|
||||||
|
_spec.Edges.Add = append(_spec.Edges.Add, edge)
|
||||||
|
}
|
||||||
|
if buo.mutation.MinedTxsCleared() {
|
||||||
|
edge := &sqlgraph.EdgeSpec{
|
||||||
|
Rel: sqlgraph.M2M,
|
||||||
|
Inverse: false,
|
||||||
|
Table: blocks.MinedTxsTable,
|
||||||
|
Columns: blocks.MinedTxsPrimaryKey,
|
||||||
|
Bidi: false,
|
||||||
|
Target: &sqlgraph.EdgeTarget{
|
||||||
|
IDSpec: sqlgraph.NewFieldSpec(transactions.FieldID, field.TypeInt),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
|
||||||
|
}
|
||||||
|
if nodes := buo.mutation.RemovedMinedTxsIDs(); len(nodes) > 0 && !buo.mutation.MinedTxsCleared() {
|
||||||
|
edge := &sqlgraph.EdgeSpec{
|
||||||
|
Rel: sqlgraph.M2M,
|
||||||
|
Inverse: false,
|
||||||
|
Table: blocks.MinedTxsTable,
|
||||||
|
Columns: blocks.MinedTxsPrimaryKey,
|
||||||
|
Bidi: false,
|
||||||
|
Target: &sqlgraph.EdgeTarget{
|
||||||
|
IDSpec: sqlgraph.NewFieldSpec(transactions.FieldID, field.TypeInt),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for _, k := range nodes {
|
||||||
|
edge.Target.Nodes = append(edge.Target.Nodes, k)
|
||||||
|
}
|
||||||
|
_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
|
||||||
|
}
|
||||||
|
if nodes := buo.mutation.MinedTxsIDs(); len(nodes) > 0 {
|
||||||
|
edge := &sqlgraph.EdgeSpec{
|
||||||
|
Rel: sqlgraph.M2M,
|
||||||
|
Inverse: false,
|
||||||
|
Table: blocks.MinedTxsTable,
|
||||||
|
Columns: blocks.MinedTxsPrimaryKey,
|
||||||
|
Bidi: false,
|
||||||
|
Target: &sqlgraph.EdgeTarget{
|
||||||
|
IDSpec: sqlgraph.NewFieldSpec(transactions.FieldID, field.TypeInt),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for _, k := range nodes {
|
||||||
|
edge.Target.Nodes = append(edge.Target.Nodes, k)
|
||||||
|
}
|
||||||
|
_spec.Edges.Add = append(_spec.Edges.Add, edge)
|
||||||
|
}
|
||||||
|
_node = &Blocks{config: buo.config}
|
||||||
|
_spec.Assign = _node.assignValues
|
||||||
|
_spec.ScanValues = _node.scanValues
|
||||||
|
if err = sqlgraph.UpdateNode(ctx, buo.driver, _spec); err != nil {
|
||||||
|
if _, ok := err.(*sqlgraph.NotFoundError); ok {
|
||||||
|
err = &NotFoundError{blocks.Label}
|
||||||
|
} else if sqlgraph.IsConstraintError(err) {
|
||||||
|
err = &ConstraintError{msg: err.Error(), wrap: err}
|
||||||
|
}
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
buo.mutation.done = true
|
||||||
|
return _node, nil
|
||||||
|
}
|
1042
z2/backend/ent/client.go
Normal file
616
z2/backend/ent/ent.go
Normal file
@ -0,0 +1,616 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"
	"errors"
	"fmt"
	"reflect"
	"sync"
	"thesis/ent/blocks"
	"thesis/ent/key"
	"thesis/ent/transactions"
	"thesis/ent/validators"
	"thesis/ent/whitelist"

	"entgo.io/ent"
	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
)

// ent aliases to avoid import conflicts in user's code.
type (
	Op            = ent.Op
	Hook          = ent.Hook
	Value         = ent.Value
	Query         = ent.Query
	QueryContext  = ent.QueryContext
	Querier       = ent.Querier
	QuerierFunc   = ent.QuerierFunc
	Interceptor   = ent.Interceptor
	InterceptFunc = ent.InterceptFunc
	Traverser     = ent.Traverser
	TraverseFunc  = ent.TraverseFunc
	Policy        = ent.Policy
	Mutator       = ent.Mutator
	Mutation      = ent.Mutation
	MutateFunc    = ent.MutateFunc
)

type clientCtxKey struct{}

// FromContext returns a Client stored inside a context, or nil if there isn't one.
func FromContext(ctx context.Context) *Client {
	c, _ := ctx.Value(clientCtxKey{}).(*Client)
	return c
}

// NewContext returns a new context with the given Client attached.
func NewContext(parent context.Context, c *Client) context.Context {
	return context.WithValue(parent, clientCtxKey{}, c)
}

type txCtxKey struct{}

// TxFromContext returns a Tx stored inside a context, or nil if there isn't one.
func TxFromContext(ctx context.Context) *Tx {
	tx, _ := ctx.Value(txCtxKey{}).(*Tx)
	return tx
}

// NewTxContext returns a new context with the given Tx attached.
func NewTxContext(parent context.Context, tx *Tx) context.Context {
	return context.WithValue(parent, txCtxKey{}, tx)
}

// OrderFunc applies an ordering on the sql selector.
// Deprecated: Use Asc/Desc functions or the package builders instead.
type OrderFunc func(*sql.Selector)

var (
	initCheck   sync.Once
	columnCheck sql.ColumnCheck
)

// checkColumn checks if the column exists in the given table.
func checkColumn(table, column string) error {
	initCheck.Do(func() {
		columnCheck = sql.NewColumnCheck(map[string]func(string) bool{
			blocks.Table:       blocks.ValidColumn,
			key.Table:          key.ValidColumn,
			transactions.Table: transactions.ValidColumn,
			validators.Table:   validators.ValidColumn,
			whitelist.Table:    whitelist.ValidColumn,
		})
	})
	return columnCheck(table, column)
}

// Asc applies the given fields in ASC order.
func Asc(fields ...string) func(*sql.Selector) {
	return func(s *sql.Selector) {
		for _, f := range fields {
			if err := checkColumn(s.TableName(), f); err != nil {
				s.AddError(&ValidationError{Name: f, err: fmt.Errorf("ent: %w", err)})
			}
			s.OrderBy(sql.Asc(s.C(f)))
		}
	}
}

// Desc applies the given fields in DESC order.
func Desc(fields ...string) func(*sql.Selector) {
	return func(s *sql.Selector) {
		for _, f := range fields {
			if err := checkColumn(s.TableName(), f); err != nil {
				s.AddError(&ValidationError{Name: f, err: fmt.Errorf("ent: %w", err)})
			}
			s.OrderBy(sql.Desc(s.C(f)))
		}
	}
}

// AggregateFunc applies an aggregation step on the group-by traversal/selector.
type AggregateFunc func(*sql.Selector) string

// As is a pseudo aggregation function for renaming other aggregation functions with custom names. For example:
//
//	GroupBy(field1, field2).
//	Aggregate(ent.As(ent.Sum(field1), "sum_field1"), (ent.As(ent.Sum(field2), "sum_field2")).
//	Scan(ctx, &v)
func As(fn AggregateFunc, end string) AggregateFunc {
	return func(s *sql.Selector) string {
		return sql.As(fn(s), end)
	}
}

// Count applies the "count" aggregation function on each group.
func Count() AggregateFunc {
	return func(s *sql.Selector) string {
		return sql.Count("*")
	}
}

// Max applies the "max" aggregation function on the given field of each group.
func Max(field string) AggregateFunc {
	return func(s *sql.Selector) string {
		if err := checkColumn(s.TableName(), field); err != nil {
			s.AddError(&ValidationError{Name: field, err: fmt.Errorf("ent: %w", err)})
			return ""
		}
		return sql.Max(s.C(field))
	}
}

// Mean applies the "mean" aggregation function on the given field of each group.
func Mean(field string) AggregateFunc {
	return func(s *sql.Selector) string {
		if err := checkColumn(s.TableName(), field); err != nil {
			s.AddError(&ValidationError{Name: field, err: fmt.Errorf("ent: %w", err)})
			return ""
		}
		return sql.Avg(s.C(field))
	}
}

// Min applies the "min" aggregation function on the given field of each group.
func Min(field string) AggregateFunc {
	return func(s *sql.Selector) string {
		if err := checkColumn(s.TableName(), field); err != nil {
			s.AddError(&ValidationError{Name: field, err: fmt.Errorf("ent: %w", err)})
			return ""
		}
		return sql.Min(s.C(field))
	}
}

// Sum applies the "sum" aggregation function on the given field of each group.
func Sum(field string) AggregateFunc {
	return func(s *sql.Selector) string {
		if err := checkColumn(s.TableName(), field); err != nil {
			s.AddError(&ValidationError{Name: field, err: fmt.Errorf("ent: %w", err)})
			return ""
		}
		return sql.Sum(s.C(field))
	}
}

// ValidationError returns when validating a field or edge fails.
type ValidationError struct {
	Name string // Field or edge name.
	err  error
}

// Error implements the error interface.
func (e *ValidationError) Error() string {
	return e.err.Error()
}

// Unwrap implements the errors.Wrapper interface.
func (e *ValidationError) Unwrap() error {
	return e.err
}

// IsValidationError returns a boolean indicating whether the error is a validation error.
func IsValidationError(err error) bool {
	if err == nil {
		return false
	}
	var e *ValidationError
	return errors.As(err, &e)
}

// NotFoundError returns when trying to fetch a specific entity and it was not found in the database.
type NotFoundError struct {
	label string
}

// Error implements the error interface.
func (e *NotFoundError) Error() string {
	return "ent: " + e.label + " not found"
}

// IsNotFound returns a boolean indicating whether the error is a not found error.
func IsNotFound(err error) bool {
	if err == nil {
		return false
	}
	var e *NotFoundError
	return errors.As(err, &e)
}

// MaskNotFound masks not found error.
func MaskNotFound(err error) error {
	if IsNotFound(err) {
		return nil
	}
	return err
}

// NotSingularError returns when trying to fetch a singular entity and more than one was found in the database.
type NotSingularError struct {
	label string
}

// Error implements the error interface.
func (e *NotSingularError) Error() string {
	return "ent: " + e.label + " not singular"
}

// IsNotSingular returns a boolean indicating whether the error is a not singular error.
func IsNotSingular(err error) bool {
	if err == nil {
		return false
	}
	var e *NotSingularError
	return errors.As(err, &e)
}

// NotLoadedError returns when trying to get a node that was not loaded by the query.
type NotLoadedError struct {
	edge string
}

// Error implements the error interface.
func (e *NotLoadedError) Error() string {
	return "ent: " + e.edge + " edge was not loaded"
}

// IsNotLoaded returns a boolean indicating whether the error is a not loaded error.
func IsNotLoaded(err error) bool {
	if err == nil {
		return false
	}
	var e *NotLoadedError
	return errors.As(err, &e)
}

// ConstraintError returns when trying to create/update one or more entities and
// one or more of their constraints failed. For example, violation of edge or
// field uniqueness.
type ConstraintError struct {
	msg  string
	wrap error
}

// Error implements the error interface.
func (e ConstraintError) Error() string {
	return "ent: constraint failed: " + e.msg
}

// Unwrap implements the errors.Wrapper interface.
func (e *ConstraintError) Unwrap() error {
	return e.wrap
}

// IsConstraintError returns a boolean indicating whether the error is a constraint failure.
func IsConstraintError(err error) bool {
	if err == nil {
		return false
	}
	var e *ConstraintError
	return errors.As(err, &e)
}

// selector embedded by the different Select/GroupBy builders.
type selector struct {
	label string
	flds  *[]string
	fns   []AggregateFunc
	scan  func(context.Context, any) error
}

// ScanX is like Scan, but panics if an error occurs.
func (s *selector) ScanX(ctx context.Context, v any) {
	if err := s.scan(ctx, v); err != nil {
		panic(err)
	}
}

// Strings returns list of strings from a selector. It is only allowed when selecting one field.
func (s *selector) Strings(ctx context.Context) ([]string, error) {
	if len(*s.flds) > 1 {
		return nil, errors.New("ent: Strings is not achievable when selecting more than 1 field")
	}
	var v []string
	if err := s.scan(ctx, &v); err != nil {
		return nil, err
	}
	return v, nil
}

// StringsX is like Strings, but panics if an error occurs.
func (s *selector) StringsX(ctx context.Context) []string {
	v, err := s.Strings(ctx)
	if err != nil {
		panic(err)
	}
	return v
}

// String returns a single string from a selector. It is only allowed when selecting one field.
func (s *selector) String(ctx context.Context) (_ string, err error) {
	var v []string
	if v, err = s.Strings(ctx); err != nil {
		return
	}
	switch len(v) {
	case 1:
		return v[0], nil
	case 0:
		err = &NotFoundError{s.label}
	default:
		err = fmt.Errorf("ent: Strings returned %d results when one was expected", len(v))
	}
	return
}

// StringX is like String, but panics if an error occurs.
func (s *selector) StringX(ctx context.Context) string {
	v, err := s.String(ctx)
	if err != nil {
		panic(err)
	}
	return v
}

// Ints returns list of ints from a selector. It is only allowed when selecting one field.
func (s *selector) Ints(ctx context.Context) ([]int, error) {
	if len(*s.flds) > 1 {
		return nil, errors.New("ent: Ints is not achievable when selecting more than 1 field")
	}
	var v []int
	if err := s.scan(ctx, &v); err != nil {
		return nil, err
	}
	return v, nil
}

// IntsX is like Ints, but panics if an error occurs.
func (s *selector) IntsX(ctx context.Context) []int {
	v, err := s.Ints(ctx)
	if err != nil {
		panic(err)
	}
	return v
}

// Int returns a single int from a selector. It is only allowed when selecting one field.
func (s *selector) Int(ctx context.Context) (_ int, err error) {
	var v []int
	if v, err = s.Ints(ctx); err != nil {
		return
	}
	switch len(v) {
	case 1:
		return v[0], nil
	case 0:
		err = &NotFoundError{s.label}
	default:
		err = fmt.Errorf("ent: Ints returned %d results when one was expected", len(v))
	}
	return
}

// IntX is like Int, but panics if an error occurs.
func (s *selector) IntX(ctx context.Context) int {
	v, err := s.Int(ctx)
	if err != nil {
		panic(err)
	}
	return v
}

// Float64s returns list of float64s from a selector. It is only allowed when selecting one field.
func (s *selector) Float64s(ctx context.Context) ([]float64, error) {
	if len(*s.flds) > 1 {
		return nil, errors.New("ent: Float64s is not achievable when selecting more than 1 field")
	}
	var v []float64
	if err := s.scan(ctx, &v); err != nil {
		return nil, err
	}
	return v, nil
}

// Float64sX is like Float64s, but panics if an error occurs.
func (s *selector) Float64sX(ctx context.Context) []float64 {
	v, err := s.Float64s(ctx)
	if err != nil {
		panic(err)
	}
	return v
}

// Float64 returns a single float64 from a selector. It is only allowed when selecting one field.
func (s *selector) Float64(ctx context.Context) (_ float64, err error) {
	var v []float64
	if v, err = s.Float64s(ctx); err != nil {
		return
	}
	switch len(v) {
	case 1:
		return v[0], nil
	case 0:
		err = &NotFoundError{s.label}
	default:
		err = fmt.Errorf("ent: Float64s returned %d results when one was expected", len(v))
	}
	return
}

// Float64X is like Float64, but panics if an error occurs.
func (s *selector) Float64X(ctx context.Context) float64 {
	v, err := s.Float64(ctx)
	if err != nil {
		panic(err)
	}
	return v
}

// Bools returns list of bools from a selector. It is only allowed when selecting one field.
func (s *selector) Bools(ctx context.Context) ([]bool, error) {
	if len(*s.flds) > 1 {
		return nil, errors.New("ent: Bools is not achievable when selecting more than 1 field")
	}
	var v []bool
	if err := s.scan(ctx, &v); err != nil {
		return nil, err
	}
	return v, nil
}

// BoolsX is like Bools, but panics if an error occurs.
func (s *selector) BoolsX(ctx context.Context) []bool {
	v, err := s.Bools(ctx)
	if err != nil {
		panic(err)
	}
	return v
}

// Bool returns a single bool from a selector. It is only allowed when selecting one field.
func (s *selector) Bool(ctx context.Context) (_ bool, err error) {
	var v []bool
	if v, err = s.Bools(ctx); err != nil {
		return
	}
	switch len(v) {
	case 1:
		return v[0], nil
	case 0:
		err = &NotFoundError{s.label}
	default:
		err = fmt.Errorf("ent: Bools returned %d results when one was expected", len(v))
	}
	return
}

// BoolX is like Bool, but panics if an error occurs.
func (s *selector) BoolX(ctx context.Context) bool {
	v, err := s.Bool(ctx)
	if err != nil {
		panic(err)
	}
	return v
}

// withHooks invokes the builder operation with the given hooks, if any.
func withHooks[V Value, M any, PM interface {
	*M
	Mutation
}](ctx context.Context, exec func(context.Context) (V, error), mutation PM, hooks []Hook) (value V, err error) {
	if len(hooks) == 0 {
		return exec(ctx)
	}
	var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
		mutationT, ok := any(m).(PM)
		if !ok {
			return nil, fmt.Errorf("unexpected mutation type %T", m)
		}
		// Set the mutation to the builder.
		*mutation = *mutationT
		return exec(ctx)
	})
|
||||||
|
for i := len(hooks) - 1; i >= 0; i-- {
|
||||||
|
if hooks[i] == nil {
|
||||||
|
return value, fmt.Errorf("ent: uninitialized hook (forgotten import ent/runtime?)")
|
||||||
|
}
|
||||||
|
mut = hooks[i](mut)
|
||||||
|
}
|
||||||
|
v, err := mut.Mutate(ctx, mutation)
|
||||||
|
if err != nil {
|
||||||
|
return value, err
|
||||||
|
}
|
||||||
|
nv, ok := v.(V)
|
||||||
|
if !ok {
|
||||||
|
return value, fmt.Errorf("unexpected node type %T returned from %T", v, mutation)
|
||||||
|
}
|
||||||
|
return nv, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// setContextOp returns a new context with the given QueryContext attached (including its op) in case it does not exist.
|
||||||
|
func setContextOp(ctx context.Context, qc *QueryContext, op string) context.Context {
|
||||||
|
if ent.QueryFromContext(ctx) == nil {
|
||||||
|
qc.Op = op
|
||||||
|
ctx = ent.NewQueryContext(ctx, qc)
|
||||||
|
}
|
||||||
|
return ctx
|
||||||
|
}
|
||||||
|
|
||||||
|
func querierAll[V Value, Q interface {
|
||||||
|
sqlAll(context.Context, ...queryHook) (V, error)
|
||||||
|
}]() Querier {
|
||||||
|
return QuerierFunc(func(ctx context.Context, q Query) (Value, error) {
|
||||||
|
query, ok := q.(Q)
|
||||||
|
if !ok {
|
||||||
|
return nil, fmt.Errorf("unexpected query type %T", q)
|
||||||
|
}
|
||||||
|
return query.sqlAll(ctx)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
func querierCount[Q interface {
|
||||||
|
sqlCount(context.Context) (int, error)
|
||||||
|
}]() Querier {
|
||||||
|
return QuerierFunc(func(ctx context.Context, q Query) (Value, error) {
|
||||||
|
query, ok := q.(Q)
|
||||||
|
if !ok {
|
||||||
|
return nil, fmt.Errorf("unexpected query type %T", q)
|
||||||
|
}
|
||||||
|
return query.sqlCount(ctx)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
func withInterceptors[V Value](ctx context.Context, q Query, qr Querier, inters []Interceptor) (v V, err error) {
|
||||||
|
for i := len(inters) - 1; i >= 0; i-- {
|
||||||
|
qr = inters[i].Intercept(qr)
|
||||||
|
}
|
||||||
|
rv, err := qr.Query(ctx, q)
|
||||||
|
if err != nil {
|
||||||
|
return v, err
|
||||||
|
}
|
||||||
|
vt, ok := rv.(V)
|
||||||
|
if !ok {
|
||||||
|
return v, fmt.Errorf("unexpected type %T returned from %T. expected type: %T", vt, q, v)
|
||||||
|
}
|
||||||
|
return vt, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func scanWithInterceptors[Q1 ent.Query, Q2 interface {
|
||||||
|
sqlScan(context.Context, Q1, any) error
|
||||||
|
}](ctx context.Context, rootQuery Q1, selectOrGroup Q2, inters []Interceptor, v any) error {
|
||||||
|
rv := reflect.ValueOf(v)
|
||||||
|
var qr Querier = QuerierFunc(func(ctx context.Context, q Query) (Value, error) {
|
||||||
|
query, ok := q.(Q1)
|
||||||
|
if !ok {
|
||||||
|
return nil, fmt.Errorf("unexpected query type %T", q)
|
||||||
|
}
|
||||||
|
if err := selectOrGroup.sqlScan(ctx, query, v); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
if k := rv.Kind(); k == reflect.Pointer && rv.Elem().CanInterface() {
|
||||||
|
return rv.Elem().Interface(), nil
|
||||||
|
}
|
||||||
|
return v, nil
|
||||||
|
})
|
||||||
|
for i := len(inters) - 1; i >= 0; i-- {
|
||||||
|
qr = inters[i].Intercept(qr)
|
||||||
|
}
|
||||||
|
vv, err := qr.Query(ctx, rootQuery)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
switch rv2 := reflect.ValueOf(vv); {
|
||||||
|
case rv.IsNil(), rv2.IsNil(), rv.Kind() != reflect.Pointer:
|
||||||
|
case rv.Type() == rv2.Type():
|
||||||
|
rv.Elem().Set(rv2.Elem())
|
||||||
|
case rv.Elem().Type() == rv2.Type():
|
||||||
|
rv.Elem().Set(rv2)
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// queryHook describes an internal hook for the different sqlAll methods.
|
||||||
|
type queryHook func(context.Context, *sqlgraph.QuerySpec)
|
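Note (not part of the generated files): `withHooks` above wraps the core mutation in each hook from last to first, so the first hook in the slice ends up outermost. A minimal, self-contained sketch of that wrapping order, using plain string functions instead of ent's `Mutator`/`Hook` types:

```go
package main

import "fmt"

// Mutator stands in for ent's core mutation operation.
type Mutator func(string) string

// Hook wraps a Mutator with extra behaviour, like ent.Hook.
type Hook func(Mutator) Mutator

// chain applies hooks in reverse, the same loop shape as withHooks,
// so hooks[0] becomes the outermost wrapper.
func chain(m Mutator, hooks []Hook) Mutator {
	for i := len(hooks) - 1; i >= 0; i-- {
		m = hooks[i](m)
	}
	return m
}

// tag is a hypothetical hook that records its name around the inner call.
func tag(name string) Hook {
	return func(next Mutator) Mutator {
		return func(s string) string {
			return name + "(" + next(s) + ")"
		}
	}
}

func main() {
	m := chain(func(s string) string { return s }, []Hook{tag("a"), tag("b")})
	fmt.Println(m("x")) // a(b(x)): the first hook wraps outermost
}
```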
84
z2/backend/ent/enttest/enttest.go
Normal file
@ -0,0 +1,84 @@
// Code generated by ent, DO NOT EDIT.

package enttest

import (
	"context"

	"thesis/ent"
	// required by schema hooks.
	_ "thesis/ent/runtime"

	"thesis/ent/migrate"

	"entgo.io/ent/dialect/sql/schema"
)

type (
	// TestingT is the interface that is shared between
	// testing.T and testing.B and used by enttest.
	TestingT interface {
		FailNow()
		Error(...any)
	}

	// Option configures client creation.
	Option func(*options)

	options struct {
		opts        []ent.Option
		migrateOpts []schema.MigrateOption
	}
)

// WithOptions forwards options to client creation.
func WithOptions(opts ...ent.Option) Option {
	return func(o *options) {
		o.opts = append(o.opts, opts...)
	}
}

// WithMigrateOptions forwards options to auto migration.
func WithMigrateOptions(opts ...schema.MigrateOption) Option {
	return func(o *options) {
		o.migrateOpts = append(o.migrateOpts, opts...)
	}
}

func newOptions(opts []Option) *options {
	o := &options{}
	for _, opt := range opts {
		opt(o)
	}
	return o
}

// Open calls ent.Open and auto-run migration.
func Open(t TestingT, driverName, dataSourceName string, opts ...Option) *ent.Client {
	o := newOptions(opts)
	c, err := ent.Open(driverName, dataSourceName, o.opts...)
	if err != nil {
		t.Error(err)
		t.FailNow()
	}
	migrateSchema(t, c, o)
	return c
}

// NewClient calls ent.NewClient and auto-run migration.
func NewClient(t TestingT, opts ...Option) *ent.Client {
	o := newOptions(opts)
	c := ent.NewClient(o.opts...)
	migrateSchema(t, c, o)
	return c
}

func migrateSchema(t TestingT, c *ent.Client, o *options) {
	tables, err := schema.CopyTables(migrate.Tables)
	if err != nil {
		t.Error(err)
		t.FailNow()
	}
	if err := migrate.Create(context.Background(), c.Schema, tables, o.migrateOpts...); err != nil {
		t.Error(err)
		t.FailNow()
	}
}
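Note (not part of the generated files): `enttest` configures its client through the functional-options pattern — each `Option` mutates a private `options` struct, and `newOptions` folds them left to right. A self-contained sketch of the same pattern with plain strings standing in for `ent.Option` and `schema.MigrateOption`:

```go
package main

import "fmt"

// options mirrors enttest's private options struct, with strings
// standing in for the real option types.
type options struct {
	opts        []string
	migrateOpts []string
}

// Option configures client creation, as in enttest.
type Option func(*options)

// WithOptions forwards options to client creation.
func WithOptions(opts ...string) Option {
	return func(o *options) { o.opts = append(o.opts, opts...) }
}

// WithMigrateOptions forwards options to auto migration.
func WithMigrateOptions(opts ...string) Option {
	return func(o *options) { o.migrateOpts = append(o.migrateOpts, opts...) }
}

// newOptions applies every Option in order, exactly like the generated code.
func newOptions(opts []Option) *options {
	o := &options{}
	for _, opt := range opts {
		opt(o)
	}
	return o
}

func main() {
	o := newOptions([]Option{WithOptions("debug"), WithMigrateOptions("drop-index")})
	fmt.Println(o.opts, o.migrateOpts) // [debug] [drop-index]
}
```

Callers never see the `options` struct; they compose behaviour purely from the exported `With...` constructors, which keeps the API extensible without breaking signatures.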
3
z2/backend/ent/generate.go
Normal file
@ -0,0 +1,3 @@
package ent

//go:generate go run -mod=mod entgo.io/ent/cmd/ent generate ./schema
246
z2/backend/ent/hook/hook.go
Normal file
@ -0,0 +1,246 @@
// Code generated by ent, DO NOT EDIT.

package hook

import (
	"context"
	"fmt"

	"thesis/ent"
)

// The BlocksFunc type is an adapter to allow the use of ordinary
// function as Blocks mutator.
type BlocksFunc func(context.Context, *ent.BlocksMutation) (ent.Value, error)

// Mutate calls f(ctx, m).
func (f BlocksFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, error) {
	if mv, ok := m.(*ent.BlocksMutation); ok {
		return f(ctx, mv)
	}
	return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.BlocksMutation", m)
}

// The KeyFunc type is an adapter to allow the use of ordinary
// function as Key mutator.
type KeyFunc func(context.Context, *ent.KeyMutation) (ent.Value, error)

// Mutate calls f(ctx, m).
func (f KeyFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, error) {
	if mv, ok := m.(*ent.KeyMutation); ok {
		return f(ctx, mv)
	}
	return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.KeyMutation", m)
}

// The TransactionsFunc type is an adapter to allow the use of ordinary
// function as Transactions mutator.
type TransactionsFunc func(context.Context, *ent.TransactionsMutation) (ent.Value, error)

// Mutate calls f(ctx, m).
func (f TransactionsFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, error) {
	if mv, ok := m.(*ent.TransactionsMutation); ok {
		return f(ctx, mv)
	}
	return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.TransactionsMutation", m)
}

// The ValidatorsFunc type is an adapter to allow the use of ordinary
// function as Validators mutator.
type ValidatorsFunc func(context.Context, *ent.ValidatorsMutation) (ent.Value, error)

// Mutate calls f(ctx, m).
func (f ValidatorsFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, error) {
	if mv, ok := m.(*ent.ValidatorsMutation); ok {
		return f(ctx, mv)
	}
	return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.ValidatorsMutation", m)
}

// The WhiteListFunc type is an adapter to allow the use of ordinary
// function as WhiteList mutator.
type WhiteListFunc func(context.Context, *ent.WhiteListMutation) (ent.Value, error)

// Mutate calls f(ctx, m).
func (f WhiteListFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, error) {
	if mv, ok := m.(*ent.WhiteListMutation); ok {
		return f(ctx, mv)
	}
	return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.WhiteListMutation", m)
}

// Condition is a hook condition function.
type Condition func(context.Context, ent.Mutation) bool

// And groups conditions with the AND operator.
func And(first, second Condition, rest ...Condition) Condition {
	return func(ctx context.Context, m ent.Mutation) bool {
		if !first(ctx, m) || !second(ctx, m) {
			return false
		}
		for _, cond := range rest {
			if !cond(ctx, m) {
				return false
			}
		}
		return true
	}
}

// Or groups conditions with the OR operator.
func Or(first, second Condition, rest ...Condition) Condition {
	return func(ctx context.Context, m ent.Mutation) bool {
		if first(ctx, m) || second(ctx, m) {
			return true
		}
		for _, cond := range rest {
			if cond(ctx, m) {
				return true
			}
		}
		return false
	}
}

// Not negates a given condition.
func Not(cond Condition) Condition {
	return func(ctx context.Context, m ent.Mutation) bool {
		return !cond(ctx, m)
	}
}

// HasOp is a condition testing mutation operation.
func HasOp(op ent.Op) Condition {
	return func(_ context.Context, m ent.Mutation) bool {
		return m.Op().Is(op)
	}
}

// HasAddedFields is a condition validating `.AddedField` on fields.
func HasAddedFields(field string, fields ...string) Condition {
	return func(_ context.Context, m ent.Mutation) bool {
		if _, exists := m.AddedField(field); !exists {
			return false
		}
		for _, field := range fields {
			if _, exists := m.AddedField(field); !exists {
				return false
			}
		}
		return true
	}
}

// HasClearedFields is a condition validating `.FieldCleared` on fields.
func HasClearedFields(field string, fields ...string) Condition {
	return func(_ context.Context, m ent.Mutation) bool {
		if exists := m.FieldCleared(field); !exists {
			return false
		}
		for _, field := range fields {
			if exists := m.FieldCleared(field); !exists {
				return false
			}
		}
		return true
	}
}

// HasFields is a condition validating `.Field` on fields.
func HasFields(field string, fields ...string) Condition {
	return func(_ context.Context, m ent.Mutation) bool {
		if _, exists := m.Field(field); !exists {
			return false
		}
		for _, field := range fields {
			if _, exists := m.Field(field); !exists {
				return false
			}
		}
		return true
	}
}

// If executes the given hook under condition.
//
//	hook.If(ComputeAverage, And(HasFields(...), HasAddedFields(...)))
func If(hk ent.Hook, cond Condition) ent.Hook {
	return func(next ent.Mutator) ent.Mutator {
		return ent.MutateFunc(func(ctx context.Context, m ent.Mutation) (ent.Value, error) {
			if cond(ctx, m) {
				return hk(next).Mutate(ctx, m)
			}
			return next.Mutate(ctx, m)
		})
	}
}

// On executes the given hook only for the given operation.
//
//	hook.On(Log, ent.Delete|ent.Create)
func On(hk ent.Hook, op ent.Op) ent.Hook {
	return If(hk, HasOp(op))
}

// Unless skips the given hook only for the given operation.
//
//	hook.Unless(Log, ent.Update|ent.UpdateOne)
func Unless(hk ent.Hook, op ent.Op) ent.Hook {
	return If(hk, Not(HasOp(op)))
}

// FixedError is a hook returning a fixed error.
func FixedError(err error) ent.Hook {
	return func(ent.Mutator) ent.Mutator {
		return ent.MutateFunc(func(context.Context, ent.Mutation) (ent.Value, error) {
			return nil, err
		})
	}
}

// Reject returns a hook that rejects all operations that match op.
//
//	func (T) Hooks() []ent.Hook {
//		return []ent.Hook{
//			Reject(ent.Delete|ent.Update),
//		}
//	}
func Reject(op ent.Op) ent.Hook {
	hk := FixedError(fmt.Errorf("%s operation is not allowed", op))
	return On(hk, op)
}

// Chain acts as a list of hooks and is effectively immutable.
// Once created, it will always hold the same set of hooks in the same order.
type Chain struct {
	hooks []ent.Hook
}

// NewChain creates a new chain of hooks.
func NewChain(hooks ...ent.Hook) Chain {
	return Chain{append([]ent.Hook(nil), hooks...)}
}

// Hook chains the list of hooks and returns the final hook.
func (c Chain) Hook() ent.Hook {
	return func(mutator ent.Mutator) ent.Mutator {
		for i := len(c.hooks) - 1; i >= 0; i-- {
			mutator = c.hooks[i](mutator)
		}
		return mutator
	}
}

// Append extends a chain, adding the specified hook
// as the last ones in the mutation flow.
func (c Chain) Append(hooks ...ent.Hook) Chain {
	newHooks := make([]ent.Hook, 0, len(c.hooks)+len(hooks))
	newHooks = append(newHooks, c.hooks...)
	newHooks = append(newHooks, hooks...)
	return Chain{newHooks}
}

// Extend extends a chain, adding the specified chain
// as the last ones in the mutation flow.
func (c Chain) Extend(chain Chain) Chain {
	return c.Append(chain.hooks...)
}
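Note (not part of the generated files): the `Condition` combinators above (`And`, `Or`, `Not`) are ordinary short-circuiting predicate composition. A self-contained sketch of the same combinators over plain `int` predicates instead of `ent.Mutation`:

```go
package main

import "fmt"

// Condition is a predicate over int, standing in for the generated
// Condition over (context.Context, ent.Mutation).
type Condition func(int) bool

// And groups conditions with the AND operator, short-circuiting on the
// first false, exactly like the generated And.
func And(first, second Condition, rest ...Condition) Condition {
	return func(n int) bool {
		if !first(n) || !second(n) {
			return false
		}
		for _, cond := range rest {
			if !cond(n) {
				return false
			}
		}
		return true
	}
}

// Not negates a given condition.
func Not(cond Condition) Condition {
	return func(n int) bool { return !cond(n) }
}

func main() {
	positive := func(n int) bool { return n > 0 }
	even := func(n int) bool { return n%2 == 0 }
	cond := And(positive, Not(even)) // positive AND odd
	fmt.Println(cond(3), cond(4))    // true false
}
```

In the generated package, the same composed condition would be passed to `hook.If` so a hook only fires when every sub-condition holds for the mutation.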
173
z2/backend/ent/key.go
Normal file
@ -0,0 +1,173 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"fmt"
	"strings"
	"thesis/ent/key"

	"entgo.io/ent"
	"entgo.io/ent/dialect/sql"
)

// Key is the model entity for the Key schema.
type Key struct {
	config `json:"-"`
	// ID of the ent.
	ID int `json:"id,omitempty"`
	// PublicKey holds the value of the "publicKey" field.
	PublicKey string `json:"publicKey,omitempty"`
	// Owner holds the value of the "Owner" field.
	Owner string `json:"Owner,omitempty"`
	// TrustScore holds the value of the "trustScore" field.
	TrustScore float64 `json:"trustScore,omitempty"`
	// Edges holds the relations/edges for other nodes in the graph.
	// The values are being populated by the KeyQuery when eager-loading is set.
	Edges              KeyEdges `json:"edges"`
	validators_key     *int
	white_list_account *int
	selectValues       sql.SelectValues
}

// KeyEdges holds the relations/edges for other nodes in the graph.
type KeyEdges struct {
	// Signed holds the value of the Signed edge.
	Signed []*Transactions `json:"Signed,omitempty"`
	// loadedTypes holds the information for reporting if a
	// type was loaded (or requested) in eager-loading or not.
	loadedTypes [1]bool
}

// SignedOrErr returns the Signed value or an error if the edge
// was not loaded in eager-loading.
func (e KeyEdges) SignedOrErr() ([]*Transactions, error) {
	if e.loadedTypes[0] {
		return e.Signed, nil
	}
	return nil, &NotLoadedError{edge: "Signed"}
}

// scanValues returns the types for scanning values from sql.Rows.
func (*Key) scanValues(columns []string) ([]any, error) {
	values := make([]any, len(columns))
	for i := range columns {
		switch columns[i] {
		case key.FieldTrustScore:
			values[i] = new(sql.NullFloat64)
		case key.FieldID:
			values[i] = new(sql.NullInt64)
		case key.FieldPublicKey, key.FieldOwner:
			values[i] = new(sql.NullString)
		case key.ForeignKeys[0]: // validators_key
			values[i] = new(sql.NullInt64)
		case key.ForeignKeys[1]: // white_list_account
			values[i] = new(sql.NullInt64)
		default:
			values[i] = new(sql.UnknownType)
		}
	}
	return values, nil
}

// assignValues assigns the values that were returned from sql.Rows (after scanning)
// to the Key fields.
func (k *Key) assignValues(columns []string, values []any) error {
	if m, n := len(values), len(columns); m < n {
		return fmt.Errorf("mismatch number of scan values: %d != %d", m, n)
	}
	for i := range columns {
		switch columns[i] {
		case key.FieldID:
			value, ok := values[i].(*sql.NullInt64)
			if !ok {
				return fmt.Errorf("unexpected type %T for field id", value)
			}
			k.ID = int(value.Int64)
		case key.FieldPublicKey:
			if value, ok := values[i].(*sql.NullString); !ok {
				return fmt.Errorf("unexpected type %T for field publicKey", values[i])
			} else if value.Valid {
				k.PublicKey = value.String
			}
		case key.FieldOwner:
			if value, ok := values[i].(*sql.NullString); !ok {
				return fmt.Errorf("unexpected type %T for field Owner", values[i])
			} else if value.Valid {
				k.Owner = value.String
			}
		case key.FieldTrustScore:
			if value, ok := values[i].(*sql.NullFloat64); !ok {
				return fmt.Errorf("unexpected type %T for field trustScore", values[i])
			} else if value.Valid {
				k.TrustScore = value.Float64
			}
		case key.ForeignKeys[0]:
			if value, ok := values[i].(*sql.NullInt64); !ok {
				return fmt.Errorf("unexpected type %T for edge-field validators_key", value)
			} else if value.Valid {
				k.validators_key = new(int)
				*k.validators_key = int(value.Int64)
			}
		case key.ForeignKeys[1]:
			if value, ok := values[i].(*sql.NullInt64); !ok {
				return fmt.Errorf("unexpected type %T for edge-field white_list_account", value)
			} else if value.Valid {
				k.white_list_account = new(int)
				*k.white_list_account = int(value.Int64)
			}
		default:
			k.selectValues.Set(columns[i], values[i])
		}
	}
	return nil
}

// Value returns the ent.Value that was dynamically selected and assigned to the Key.
// This includes values selected through modifiers, order, etc.
func (k *Key) Value(name string) (ent.Value, error) {
	return k.selectValues.Get(name)
}

// QuerySigned queries the "Signed" edge of the Key entity.
func (k *Key) QuerySigned() *TransactionsQuery {
	return NewKeyClient(k.config).QuerySigned(k)
}

// Update returns a builder for updating this Key.
// Note that you need to call Key.Unwrap() before calling this method if this Key
// was returned from a transaction, and the transaction was committed or rolled back.
func (k *Key) Update() *KeyUpdateOne {
	return NewKeyClient(k.config).UpdateOne(k)
}

// Unwrap unwraps the Key entity that was returned from a transaction after it was closed,
// so that all future queries will be executed through the driver which created the transaction.
func (k *Key) Unwrap() *Key {
	_tx, ok := k.config.driver.(*txDriver)
	if !ok {
		panic("ent: Key is not a transactional entity")
	}
	k.config.driver = _tx.drv
	return k
}

// String implements the fmt.Stringer.
func (k *Key) String() string {
	var builder strings.Builder
	builder.WriteString("Key(")
	builder.WriteString(fmt.Sprintf("id=%v, ", k.ID))
	builder.WriteString("publicKey=")
	builder.WriteString(k.PublicKey)
	builder.WriteString(", ")
	builder.WriteString("Owner=")
	builder.WriteString(k.Owner)
	builder.WriteString(", ")
	builder.WriteString("trustScore=")
	builder.WriteString(fmt.Sprintf("%v", k.TrustScore))
	builder.WriteByte(')')
	return builder.String()
}

// Keys is a parsable slice of Key.
type Keys []*Key
119
z2/backend/ent/key/key.go
Normal file
@ -0,0 +1,119 @@
// Code generated by ent, DO NOT EDIT.

package key

import (
	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
)

const (
	// Label holds the string label denoting the key type in the database.
	Label = "key"
	// FieldID holds the string denoting the id field in the database.
	FieldID = "id"
	// FieldPublicKey holds the string denoting the publickey field in the database.
	FieldPublicKey = "public_key"
	// FieldOwner holds the string denoting the owner field in the database.
	FieldOwner = "owner"
	// FieldTrustScore holds the string denoting the trustscore field in the database.
	FieldTrustScore = "trust_score"
	// EdgeSigned holds the string denoting the signed edge name in mutations.
	EdgeSigned = "Signed"
	// Table holds the table name of the key in the database.
	Table = "keys"
	// SignedTable is the table that holds the Signed relation/edge. The primary key declared below.
	SignedTable = "key_Signed"
	// SignedInverseTable is the table name for the Transactions entity.
	// It exists in this package in order to avoid circular dependency with the "transactions" package.
	SignedInverseTable = "transactions"
)

// Columns holds all SQL columns for key fields.
var Columns = []string{
	FieldID,
	FieldPublicKey,
	FieldOwner,
	FieldTrustScore,
}

// ForeignKeys holds the SQL foreign-keys that are owned by the "keys"
// table and are not defined as standalone fields in the schema.
var ForeignKeys = []string{
	"validators_key",
	"white_list_account",
}

var (
	// SignedPrimaryKey and SignedColumn2 are the table columns denoting the
	// primary key for the Signed relation (M2M).
	SignedPrimaryKey = []string{"key_id", "transactions_id"}
)

// ValidColumn reports if the column name is valid (part of the table columns).
func ValidColumn(column string) bool {
	for i := range Columns {
		if column == Columns[i] {
			return true
		}
	}
	for i := range ForeignKeys {
		if column == ForeignKeys[i] {
			return true
		}
	}
	return false
}

var (
	// PublicKeyValidator is a validator for the "publicKey" field. It is called by the builders before save.
	PublicKeyValidator func(string) error
	// OwnerValidator is a validator for the "Owner" field. It is called by the builders before save.
	OwnerValidator func(string) error
	// DefaultTrustScore holds the default value on creation for the "trustScore" field.
	DefaultTrustScore float64
)

// OrderOption defines the ordering options for the Key queries.
type OrderOption func(*sql.Selector)

// ByID orders the results by the id field.
func ByID(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldID, opts...).ToFunc()
}

// ByPublicKey orders the results by the publicKey field.
func ByPublicKey(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldPublicKey, opts...).ToFunc()
}

// ByOwner orders the results by the Owner field.
func ByOwner(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldOwner, opts...).ToFunc()
}

// ByTrustScore orders the results by the trustScore field.
func ByTrustScore(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldTrustScore, opts...).ToFunc()
}

// BySignedCount orders the results by Signed count.
func BySignedCount(opts ...sql.OrderTermOption) OrderOption {
	return func(s *sql.Selector) {
		sqlgraph.OrderByNeighborsCount(s, newSignedStep(), opts...)
	}
}

// BySigned orders the results by Signed terms.
func BySigned(term sql.OrderTerm, terms ...sql.OrderTerm) OrderOption {
	return func(s *sql.Selector) {
		sqlgraph.OrderByNeighborTerms(s, newSignedStep(), append([]sql.OrderTerm{term}, terms...)...)
	}
}

func newSignedStep() *sqlgraph.Step {
	return sqlgraph.NewStep(
		sqlgraph.From(Table, FieldID),
		sqlgraph.To(SignedInverseTable, FieldID),
		sqlgraph.Edge(sqlgraph.M2M, false, SignedTable, SignedPrimaryKey...),
	)
}
278
z2/backend/ent/key/where.go
Normal file
@ -0,0 +1,278 @@
// Code generated by ent, DO NOT EDIT.

package key

import (
	"thesis/ent/predicate"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
)

// ID filters vertices based on their ID field.
func ID(id int) predicate.Key {
	return predicate.Key(sql.FieldEQ(FieldID, id))
}

// IDEQ applies the EQ predicate on the ID field.
func IDEQ(id int) predicate.Key {
	return predicate.Key(sql.FieldEQ(FieldID, id))
}

// IDNEQ applies the NEQ predicate on the ID field.
func IDNEQ(id int) predicate.Key {
	return predicate.Key(sql.FieldNEQ(FieldID, id))
}

// IDIn applies the In predicate on the ID field.
func IDIn(ids ...int) predicate.Key {
	return predicate.Key(sql.FieldIn(FieldID, ids...))
}

// IDNotIn applies the NotIn predicate on the ID field.
func IDNotIn(ids ...int) predicate.Key {
	return predicate.Key(sql.FieldNotIn(FieldID, ids...))
}

// IDGT applies the GT predicate on the ID field.
func IDGT(id int) predicate.Key {
	return predicate.Key(sql.FieldGT(FieldID, id))
}

// IDGTE applies the GTE predicate on the ID field.
func IDGTE(id int) predicate.Key {
	return predicate.Key(sql.FieldGTE(FieldID, id))
}

// IDLT applies the LT predicate on the ID field.
func IDLT(id int) predicate.Key {
	return predicate.Key(sql.FieldLT(FieldID, id))
}

// IDLTE applies the LTE predicate on the ID field.
func IDLTE(id int) predicate.Key {
	return predicate.Key(sql.FieldLTE(FieldID, id))
}

// PublicKey applies equality check predicate on the "publicKey" field. It's identical to PublicKeyEQ.
func PublicKey(v string) predicate.Key {
	return predicate.Key(sql.FieldEQ(FieldPublicKey, v))
}

// Owner applies equality check predicate on the "Owner" field. It's identical to OwnerEQ.
func Owner(v string) predicate.Key {
	return predicate.Key(sql.FieldEQ(FieldOwner, v))
}

// TrustScore applies equality check predicate on the "trustScore" field. It's identical to TrustScoreEQ.
func TrustScore(v float64) predicate.Key {
	return predicate.Key(sql.FieldEQ(FieldTrustScore, v))
}

// PublicKeyEQ applies the EQ predicate on the "publicKey" field.
func PublicKeyEQ(v string) predicate.Key {
	return predicate.Key(sql.FieldEQ(FieldPublicKey, v))
}

// PublicKeyNEQ applies the NEQ predicate on the "publicKey" field.
func PublicKeyNEQ(v string) predicate.Key {
	return predicate.Key(sql.FieldNEQ(FieldPublicKey, v))
}

// PublicKeyIn applies the In predicate on the "publicKey" field.
func PublicKeyIn(vs ...string) predicate.Key {
	return predicate.Key(sql.FieldIn(FieldPublicKey, vs...))
}

// PublicKeyNotIn applies the NotIn predicate on the "publicKey" field.
func PublicKeyNotIn(vs ...string) predicate.Key {
	return predicate.Key(sql.FieldNotIn(FieldPublicKey, vs...))
}

// PublicKeyGT applies the GT predicate on the "publicKey" field.
func PublicKeyGT(v string) predicate.Key {
	return predicate.Key(sql.FieldGT(FieldPublicKey, v))
}

// PublicKeyGTE applies the GTE predicate on the "publicKey" field.
func PublicKeyGTE(v string) predicate.Key {
	return predicate.Key(sql.FieldGTE(FieldPublicKey, v))
}

// PublicKeyLT applies the LT predicate on the "publicKey" field.
func PublicKeyLT(v string) predicate.Key {
	return predicate.Key(sql.FieldLT(FieldPublicKey, v))
}

// PublicKeyLTE applies the LTE predicate on the "publicKey" field.
func PublicKeyLTE(v string) predicate.Key {
	return predicate.Key(sql.FieldLTE(FieldPublicKey, v))
}

// PublicKeyContains applies the Contains predicate on the "publicKey" field.
func PublicKeyContains(v string) predicate.Key {
	return predicate.Key(sql.FieldContains(FieldPublicKey, v))
}

// PublicKeyHasPrefix applies the HasPrefix predicate on the "publicKey" field.
func PublicKeyHasPrefix(v string) predicate.Key {
	return predicate.Key(sql.FieldHasPrefix(FieldPublicKey, v))
}

// PublicKeyHasSuffix applies the HasSuffix predicate on the "publicKey" field.
func PublicKeyHasSuffix(v string) predicate.Key {
	return predicate.Key(sql.FieldHasSuffix(FieldPublicKey, v))
}

// PublicKeyEqualFold applies the EqualFold predicate on the "publicKey" field.
func PublicKeyEqualFold(v string) predicate.Key {
	return predicate.Key(sql.FieldEqualFold(FieldPublicKey, v))
}

// PublicKeyContainsFold applies the ContainsFold predicate on the "publicKey" field.
func PublicKeyContainsFold(v string) predicate.Key {
	return predicate.Key(sql.FieldContainsFold(FieldPublicKey, v))
}

// OwnerEQ applies the EQ predicate on the "Owner" field.
func OwnerEQ(v string) predicate.Key {
	return predicate.Key(sql.FieldEQ(FieldOwner, v))
}

// OwnerNEQ applies the NEQ predicate on the "Owner" field.
func OwnerNEQ(v string) predicate.Key {
	return predicate.Key(sql.FieldNEQ(FieldOwner, v))
}

// OwnerIn applies the In predicate on the "Owner" field.
func OwnerIn(vs ...string) predicate.Key {
	return predicate.Key(sql.FieldIn(FieldOwner, vs...))
}

// OwnerNotIn applies the NotIn predicate on the "Owner" field.
func OwnerNotIn(vs ...string) predicate.Key {
	return predicate.Key(sql.FieldNotIn(FieldOwner, vs...))
}

// OwnerGT applies the GT predicate on the "Owner" field.
func OwnerGT(v string) predicate.Key {
	return predicate.Key(sql.FieldGT(FieldOwner, v))
}

// OwnerGTE applies the GTE predicate on the "Owner" field.
func OwnerGTE(v string) predicate.Key {
	return predicate.Key(sql.FieldGTE(FieldOwner, v))
}

// OwnerLT applies the LT predicate on the "Owner" field.
func OwnerLT(v string) predicate.Key {
	return predicate.Key(sql.FieldLT(FieldOwner, v))
}

// OwnerLTE applies the LTE predicate on the "Owner" field.
func OwnerLTE(v string) predicate.Key {
	return predicate.Key(sql.FieldLTE(FieldOwner, v))
}

// OwnerContains applies the Contains predicate on the "Owner" field.
func OwnerContains(v string) predicate.Key {
	return predicate.Key(sql.FieldContains(FieldOwner, v))
}

// OwnerHasPrefix applies the HasPrefix predicate on the "Owner" field.
func OwnerHasPrefix(v string) predicate.Key {
	return predicate.Key(sql.FieldHasPrefix(FieldOwner, v))
}

// OwnerHasSuffix applies the HasSuffix predicate on the "Owner" field.
func OwnerHasSuffix(v string) predicate.Key {
	return predicate.Key(sql.FieldHasSuffix(FieldOwner, v))
}

// OwnerEqualFold applies the EqualFold predicate on the "Owner" field.
func OwnerEqualFold(v string) predicate.Key {
	return predicate.Key(sql.FieldEqualFold(FieldOwner, v))
}

// OwnerContainsFold applies the ContainsFold predicate on the "Owner" field.
func OwnerContainsFold(v string) predicate.Key {
	return predicate.Key(sql.FieldContainsFold(FieldOwner, v))
}

// TrustScoreEQ applies the EQ predicate on the "trustScore" field.
func TrustScoreEQ(v float64) predicate.Key {
	return predicate.Key(sql.FieldEQ(FieldTrustScore, v))
}

// TrustScoreNEQ applies the NEQ predicate on the "trustScore" field.
func TrustScoreNEQ(v float64) predicate.Key {
	return predicate.Key(sql.FieldNEQ(FieldTrustScore, v))
}

// TrustScoreIn applies the In predicate on the "trustScore" field.
func TrustScoreIn(vs ...float64) predicate.Key {
	return predicate.Key(sql.FieldIn(FieldTrustScore, vs...))
}

// TrustScoreNotIn applies the NotIn predicate on the "trustScore" field.
func TrustScoreNotIn(vs ...float64) predicate.Key {
	return predicate.Key(sql.FieldNotIn(FieldTrustScore, vs...))
}

// TrustScoreGT applies the GT predicate on the "trustScore" field.
func TrustScoreGT(v float64) predicate.Key {
	return predicate.Key(sql.FieldGT(FieldTrustScore, v))
}

// TrustScoreGTE applies the GTE predicate on the "trustScore" field.
func TrustScoreGTE(v float64) predicate.Key {
	return predicate.Key(sql.FieldGTE(FieldTrustScore, v))
}

// TrustScoreLT applies the LT predicate on the "trustScore" field.
func TrustScoreLT(v float64) predicate.Key {
	return predicate.Key(sql.FieldLT(FieldTrustScore, v))
}

// TrustScoreLTE applies the LTE predicate on the "trustScore" field.
func TrustScoreLTE(v float64) predicate.Key {
	return predicate.Key(sql.FieldLTE(FieldTrustScore, v))
}

// HasSigned applies the HasEdge predicate on the "Signed" edge.
func HasSigned() predicate.Key {
	return predicate.Key(func(s *sql.Selector) {
		step := sqlgraph.NewStep(
			sqlgraph.From(Table, FieldID),
			sqlgraph.Edge(sqlgraph.M2M, false, SignedTable, SignedPrimaryKey...),
		)
		sqlgraph.HasNeighbors(s, step)
	})
}

// HasSignedWith applies the HasEdge predicate on the "Signed" edge with a given conditions (other predicates).
func HasSignedWith(preds ...predicate.Transactions) predicate.Key {
	return predicate.Key(func(s *sql.Selector) {
		step := newSignedStep()
		sqlgraph.HasNeighborsWith(s, step, func(s *sql.Selector) {
			for _, p := range preds {
				p(s)
			}
		})
	})
}

// And groups predicates with the AND operator between them.
func And(predicates ...predicate.Key) predicate.Key {
	return predicate.Key(sql.AndPredicates(predicates...))
}

// Or groups predicates with the OR operator between them.
func Or(predicates ...predicate.Key) predicate.Key {
	return predicate.Key(sql.OrPredicates(predicates...))
}

// Not applies the not operator on the given predicate.
func Not(p predicate.Key) predicate.Key {
	return predicate.Key(sql.NotPredicates(p))
}
z2/backend/ent/key_create.go (new file, 269 lines)
@@ -0,0 +1,269 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"
	"errors"
	"fmt"
	"thesis/ent/key"
	"thesis/ent/transactions"

	"entgo.io/ent/dialect/sql/sqlgraph"
	"entgo.io/ent/schema/field"
)

// KeyCreate is the builder for creating a Key entity.
type KeyCreate struct {
	config
	mutation *KeyMutation
	hooks    []Hook
}

// SetPublicKey sets the "publicKey" field.
func (kc *KeyCreate) SetPublicKey(s string) *KeyCreate {
	kc.mutation.SetPublicKey(s)
	return kc
}

// SetOwner sets the "Owner" field.
func (kc *KeyCreate) SetOwner(s string) *KeyCreate {
	kc.mutation.SetOwner(s)
	return kc
}

// SetTrustScore sets the "trustScore" field.
func (kc *KeyCreate) SetTrustScore(f float64) *KeyCreate {
	kc.mutation.SetTrustScore(f)
	return kc
}

// SetNillableTrustScore sets the "trustScore" field if the given value is not nil.
func (kc *KeyCreate) SetNillableTrustScore(f *float64) *KeyCreate {
	if f != nil {
		kc.SetTrustScore(*f)
	}
	return kc
}

// AddSignedIDs adds the "Signed" edge to the Transactions entity by IDs.
func (kc *KeyCreate) AddSignedIDs(ids ...int) *KeyCreate {
	kc.mutation.AddSignedIDs(ids...)
	return kc
}

// AddSigned adds the "Signed" edges to the Transactions entity.
func (kc *KeyCreate) AddSigned(t ...*Transactions) *KeyCreate {
	ids := make([]int, len(t))
	for i := range t {
		ids[i] = t[i].ID
	}
	return kc.AddSignedIDs(ids...)
}

// Mutation returns the KeyMutation object of the builder.
func (kc *KeyCreate) Mutation() *KeyMutation {
	return kc.mutation
}

// Save creates the Key in the database.
func (kc *KeyCreate) Save(ctx context.Context) (*Key, error) {
	kc.defaults()
	return withHooks(ctx, kc.sqlSave, kc.mutation, kc.hooks)
}

// SaveX calls Save and panics if Save returns an error.
func (kc *KeyCreate) SaveX(ctx context.Context) *Key {
	v, err := kc.Save(ctx)
	if err != nil {
		panic(err)
	}
	return v
}

// Exec executes the query.
func (kc *KeyCreate) Exec(ctx context.Context) error {
	_, err := kc.Save(ctx)
	return err
}

// ExecX is like Exec, but panics if an error occurs.
func (kc *KeyCreate) ExecX(ctx context.Context) {
	if err := kc.Exec(ctx); err != nil {
		panic(err)
	}
}

// defaults sets the default values of the builder before save.
func (kc *KeyCreate) defaults() {
	if _, ok := kc.mutation.TrustScore(); !ok {
		v := key.DefaultTrustScore
		kc.mutation.SetTrustScore(v)
	}
}

// check runs all checks and user-defined validators on the builder.
func (kc *KeyCreate) check() error {
	if _, ok := kc.mutation.PublicKey(); !ok {
		return &ValidationError{Name: "publicKey", err: errors.New(`ent: missing required field "Key.publicKey"`)}
	}
	if v, ok := kc.mutation.PublicKey(); ok {
		if err := key.PublicKeyValidator(v); err != nil {
			return &ValidationError{Name: "publicKey", err: fmt.Errorf(`ent: validator failed for field "Key.publicKey": %w`, err)}
		}
	}
	if _, ok := kc.mutation.Owner(); !ok {
		return &ValidationError{Name: "Owner", err: errors.New(`ent: missing required field "Key.Owner"`)}
	}
	if v, ok := kc.mutation.Owner(); ok {
		if err := key.OwnerValidator(v); err != nil {
			return &ValidationError{Name: "Owner", err: fmt.Errorf(`ent: validator failed for field "Key.Owner": %w`, err)}
		}
	}
	if _, ok := kc.mutation.TrustScore(); !ok {
		return &ValidationError{Name: "trustScore", err: errors.New(`ent: missing required field "Key.trustScore"`)}
	}
	return nil
}

func (kc *KeyCreate) sqlSave(ctx context.Context) (*Key, error) {
	if err := kc.check(); err != nil {
		return nil, err
	}
	_node, _spec := kc.createSpec()
	if err := sqlgraph.CreateNode(ctx, kc.driver, _spec); err != nil {
		if sqlgraph.IsConstraintError(err) {
			err = &ConstraintError{msg: err.Error(), wrap: err}
		}
		return nil, err
	}
	id := _spec.ID.Value.(int64)
	_node.ID = int(id)
	kc.mutation.id = &_node.ID
	kc.mutation.done = true
	return _node, nil
}

func (kc *KeyCreate) createSpec() (*Key, *sqlgraph.CreateSpec) {
	var (
		_node = &Key{config: kc.config}
		_spec = sqlgraph.NewCreateSpec(key.Table, sqlgraph.NewFieldSpec(key.FieldID, field.TypeInt))
	)
	if value, ok := kc.mutation.PublicKey(); ok {
		_spec.SetField(key.FieldPublicKey, field.TypeString, value)
		_node.PublicKey = value
	}
	if value, ok := kc.mutation.Owner(); ok {
		_spec.SetField(key.FieldOwner, field.TypeString, value)
		_node.Owner = value
	}
	if value, ok := kc.mutation.TrustScore(); ok {
		_spec.SetField(key.FieldTrustScore, field.TypeFloat64, value)
		_node.TrustScore = value
	}
	if nodes := kc.mutation.SignedIDs(); len(nodes) > 0 {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2M,
			Inverse: false,
			Table:   key.SignedTable,
			Columns: key.SignedPrimaryKey,
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(transactions.FieldID, field.TypeInt),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges = append(_spec.Edges, edge)
	}
	return _node, _spec
}

// KeyCreateBulk is the builder for creating many Key entities in bulk.
type KeyCreateBulk struct {
	config
	err      error
	builders []*KeyCreate
}

// Save creates the Key entities in the database.
func (kcb *KeyCreateBulk) Save(ctx context.Context) ([]*Key, error) {
	if kcb.err != nil {
		return nil, kcb.err
	}
	specs := make([]*sqlgraph.CreateSpec, len(kcb.builders))
	nodes := make([]*Key, len(kcb.builders))
	mutators := make([]Mutator, len(kcb.builders))
	for i := range kcb.builders {
		func(i int, root context.Context) {
			builder := kcb.builders[i]
			builder.defaults()
			var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
				mutation, ok := m.(*KeyMutation)
				if !ok {
					return nil, fmt.Errorf("unexpected mutation type %T", m)
				}
				if err := builder.check(); err != nil {
					return nil, err
				}
				builder.mutation = mutation
				var err error
				nodes[i], specs[i] = builder.createSpec()
				if i < len(mutators)-1 {
					_, err = mutators[i+1].Mutate(root, kcb.builders[i+1].mutation)
				} else {
					spec := &sqlgraph.BatchCreateSpec{Nodes: specs}
					// Invoke the actual operation on the latest mutation in the chain.
					if err = sqlgraph.BatchCreate(ctx, kcb.driver, spec); err != nil {
						if sqlgraph.IsConstraintError(err) {
							err = &ConstraintError{msg: err.Error(), wrap: err}
						}
					}
				}
				if err != nil {
					return nil, err
				}
				mutation.id = &nodes[i].ID
				if specs[i].ID.Value != nil {
					id := specs[i].ID.Value.(int64)
					nodes[i].ID = int(id)
				}
				mutation.done = true
				return nodes[i], nil
			})
			for i := len(builder.hooks) - 1; i >= 0; i-- {
				mut = builder.hooks[i](mut)
			}
			mutators[i] = mut
		}(i, ctx)
	}
	if len(mutators) > 0 {
		if _, err := mutators[0].Mutate(ctx, kcb.builders[0].mutation); err != nil {
			return nil, err
		}
	}
	return nodes, nil
}

// SaveX is like Save, but panics if an error occurs.
func (kcb *KeyCreateBulk) SaveX(ctx context.Context) []*Key {
	v, err := kcb.Save(ctx)
	if err != nil {
		panic(err)
	}
	return v
}

// Exec executes the query.
func (kcb *KeyCreateBulk) Exec(ctx context.Context) error {
	_, err := kcb.Save(ctx)
	return err
}

// ExecX is like Exec, but panics if an error occurs.
func (kcb *KeyCreateBulk) ExecX(ctx context.Context) {
	if err := kcb.Exec(ctx); err != nil {
		panic(err)
	}
}
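The `KeyCreate` builder above follows a fixed lifecycle: fluent setters collect values, `defaults()` fills unset fields (here `trustScore` from `key.DefaultTrustScore`), and `check()` rejects missing required fields before the insert. A minimal self-contained sketch of that lifecycle, with no ent dependency (`Builder`, `keyDraft`, and the `0.5` default are illustrative stand-ins; the real default lives in the schema):

```go
package main

import (
	"errors"
	"fmt"
)

// keyDraft mirrors the fields KeyCreate collects before saving.
type keyDraft struct {
	publicKey  string
	owner      string
	trustScore *float64
}

// Builder mimics the fluent KeyCreate API (illustrative, not the ent type).
type Builder struct{ d keyDraft }

func (b *Builder) SetPublicKey(s string) *Builder { b.d.publicKey = s; return b }
func (b *Builder) SetOwner(s string) *Builder     { b.d.owner = s; return b }

// defaults fills trustScore the way KeyCreate.defaults applies key.DefaultTrustScore.
func (b *Builder) defaults() {
	if b.d.trustScore == nil {
		v := 0.5 // stand-in for key.DefaultTrustScore
		b.d.trustScore = &v
	}
}

// check mirrors KeyCreate.check: required fields must be set before the insert runs.
func (b *Builder) check() error {
	if b.d.publicKey == "" {
		return errors.New(`missing required field "Key.publicKey"`)
	}
	if b.d.owner == "" {
		return errors.New(`missing required field "Key.Owner"`)
	}
	return nil
}

func main() {
	b := (&Builder{}).SetPublicKey("pk1").SetOwner("alice")
	b.defaults()
	fmt.Println(b.check(), *b.d.trustScore)
}
```

Running defaults before check is the important ordering: a field with a schema default can never trip the "missing required field" error, which is why `Save` calls `kc.defaults()` first and leaves `check()` to `sqlSave`.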
z2/backend/ent/key_delete.go (new file, 88 lines)
@@ -0,0 +1,88 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"
	"thesis/ent/key"
	"thesis/ent/predicate"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
	"entgo.io/ent/schema/field"
)

// KeyDelete is the builder for deleting a Key entity.
type KeyDelete struct {
	config
	hooks    []Hook
	mutation *KeyMutation
}

// Where appends a list predicates to the KeyDelete builder.
func (kd *KeyDelete) Where(ps ...predicate.Key) *KeyDelete {
	kd.mutation.Where(ps...)
	return kd
}

// Exec executes the deletion query and returns how many vertices were deleted.
func (kd *KeyDelete) Exec(ctx context.Context) (int, error) {
	return withHooks(ctx, kd.sqlExec, kd.mutation, kd.hooks)
}

// ExecX is like Exec, but panics if an error occurs.
func (kd *KeyDelete) ExecX(ctx context.Context) int {
	n, err := kd.Exec(ctx)
	if err != nil {
		panic(err)
	}
	return n
}

func (kd *KeyDelete) sqlExec(ctx context.Context) (int, error) {
	_spec := sqlgraph.NewDeleteSpec(key.Table, sqlgraph.NewFieldSpec(key.FieldID, field.TypeInt))
	if ps := kd.mutation.predicates; len(ps) > 0 {
		_spec.Predicate = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	affected, err := sqlgraph.DeleteNodes(ctx, kd.driver, _spec)
	if err != nil && sqlgraph.IsConstraintError(err) {
		err = &ConstraintError{msg: err.Error(), wrap: err}
	}
	kd.mutation.done = true
	return affected, err
}

// KeyDeleteOne is the builder for deleting a single Key entity.
type KeyDeleteOne struct {
	kd *KeyDelete
}

// Where appends a list predicates to the KeyDelete builder.
func (kdo *KeyDeleteOne) Where(ps ...predicate.Key) *KeyDeleteOne {
	kdo.kd.mutation.Where(ps...)
	return kdo
}

// Exec executes the deletion query.
func (kdo *KeyDeleteOne) Exec(ctx context.Context) error {
	n, err := kdo.kd.Exec(ctx)
	switch {
	case err != nil:
		return err
	case n == 0:
		return &NotFoundError{key.Label}
	default:
		return nil
	}
}

// ExecX is like Exec, but panics if an error occurs.
func (kdo *KeyDeleteOne) ExecX(ctx context.Context) {
	if err := kdo.Exec(ctx); err != nil {
		panic(err)
	}
}
z2/backend/ent/key_query.go (new file, 641 lines)
@@ -0,0 +1,641 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"
	"database/sql/driver"
	"fmt"
	"math"
	"thesis/ent/key"
	"thesis/ent/predicate"
	"thesis/ent/transactions"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
	"entgo.io/ent/schema/field"
)

// KeyQuery is the builder for querying Key entities.
type KeyQuery struct {
	config
	ctx        *QueryContext
	order      []key.OrderOption
	inters     []Interceptor
	predicates []predicate.Key
	withSigned *TransactionsQuery
	withFKs    bool
	// intermediate query (i.e. traversal path).
	sql  *sql.Selector
	path func(context.Context) (*sql.Selector, error)
}

// Where adds a new predicate for the KeyQuery builder.
func (kq *KeyQuery) Where(ps ...predicate.Key) *KeyQuery {
	kq.predicates = append(kq.predicates, ps...)
	return kq
}

// Limit the number of records to be returned by this query.
func (kq *KeyQuery) Limit(limit int) *KeyQuery {
	kq.ctx.Limit = &limit
	return kq
}

// Offset to start from.
func (kq *KeyQuery) Offset(offset int) *KeyQuery {
	kq.ctx.Offset = &offset
	return kq
}

// Unique configures the query builder to filter duplicate records on query.
// By default, unique is set to true, and can be disabled using this method.
func (kq *KeyQuery) Unique(unique bool) *KeyQuery {
	kq.ctx.Unique = &unique
	return kq
}

// Order specifies how the records should be ordered.
func (kq *KeyQuery) Order(o ...key.OrderOption) *KeyQuery {
	kq.order = append(kq.order, o...)
	return kq
}

// QuerySigned chains the current query on the "Signed" edge.
func (kq *KeyQuery) QuerySigned() *TransactionsQuery {
	query := (&TransactionsClient{config: kq.config}).Query()
	query.path = func(ctx context.Context) (fromU *sql.Selector, err error) {
		if err := kq.prepareQuery(ctx); err != nil {
			return nil, err
		}
		selector := kq.sqlQuery(ctx)
		if err := selector.Err(); err != nil {
			return nil, err
		}
		step := sqlgraph.NewStep(
			sqlgraph.From(key.Table, key.FieldID, selector),
			sqlgraph.To(transactions.Table, transactions.FieldID),
			sqlgraph.Edge(sqlgraph.M2M, false, key.SignedTable, key.SignedPrimaryKey...),
		)
		fromU = sqlgraph.SetNeighbors(kq.driver.Dialect(), step)
		return fromU, nil
	}
	return query
}

// First returns the first Key entity from the query.
// Returns a *NotFoundError when no Key was found.
func (kq *KeyQuery) First(ctx context.Context) (*Key, error) {
	nodes, err := kq.Limit(1).All(setContextOp(ctx, kq.ctx, "First"))
	if err != nil {
		return nil, err
	}
	if len(nodes) == 0 {
		return nil, &NotFoundError{key.Label}
	}
	return nodes[0], nil
}

// FirstX is like First, but panics if an error occurs.
func (kq *KeyQuery) FirstX(ctx context.Context) *Key {
	node, err := kq.First(ctx)
	if err != nil && !IsNotFound(err) {
		panic(err)
	}
	return node
}

// FirstID returns the first Key ID from the query.
// Returns a *NotFoundError when no Key ID was found.
func (kq *KeyQuery) FirstID(ctx context.Context) (id int, err error) {
	var ids []int
	if ids, err = kq.Limit(1).IDs(setContextOp(ctx, kq.ctx, "FirstID")); err != nil {
		return
	}
	if len(ids) == 0 {
		err = &NotFoundError{key.Label}
		return
	}
	return ids[0], nil
}

// FirstIDX is like FirstID, but panics if an error occurs.
func (kq *KeyQuery) FirstIDX(ctx context.Context) int {
	id, err := kq.FirstID(ctx)
	if err != nil && !IsNotFound(err) {
		panic(err)
	}
	return id
}

// Only returns a single Key entity found by the query, ensuring it only returns one.
// Returns a *NotSingularError when more than one Key entity is found.
// Returns a *NotFoundError when no Key entities are found.
func (kq *KeyQuery) Only(ctx context.Context) (*Key, error) {
	nodes, err := kq.Limit(2).All(setContextOp(ctx, kq.ctx, "Only"))
	if err != nil {
		return nil, err
	}
	switch len(nodes) {
	case 1:
		return nodes[0], nil
	case 0:
		return nil, &NotFoundError{key.Label}
	default:
		return nil, &NotSingularError{key.Label}
	}
}

// OnlyX is like Only, but panics if an error occurs.
func (kq *KeyQuery) OnlyX(ctx context.Context) *Key {
	node, err := kq.Only(ctx)
	if err != nil {
		panic(err)
	}
	return node
}

// OnlyID is like Only, but returns the only Key ID in the query.
// Returns a *NotSingularError when more than one Key ID is found.
// Returns a *NotFoundError when no entities are found.
func (kq *KeyQuery) OnlyID(ctx context.Context) (id int, err error) {
	var ids []int
	if ids, err = kq.Limit(2).IDs(setContextOp(ctx, kq.ctx, "OnlyID")); err != nil {
		return
	}
	switch len(ids) {
	case 1:
		id = ids[0]
	case 0:
		err = &NotFoundError{key.Label}
	default:
		err = &NotSingularError{key.Label}
	}
	return
}

// OnlyIDX is like OnlyID, but panics if an error occurs.
func (kq *KeyQuery) OnlyIDX(ctx context.Context) int {
	id, err := kq.OnlyID(ctx)
	if err != nil {
		panic(err)
	}
	return id
}

// All executes the query and returns a list of Keys.
func (kq *KeyQuery) All(ctx context.Context) ([]*Key, error) {
	ctx = setContextOp(ctx, kq.ctx, "All")
	if err := kq.prepareQuery(ctx); err != nil {
		return nil, err
	}
	qr := querierAll[[]*Key, *KeyQuery]()
	return withInterceptors[[]*Key](ctx, kq, qr, kq.inters)
}

// AllX is like All, but panics if an error occurs.
func (kq *KeyQuery) AllX(ctx context.Context) []*Key {
	nodes, err := kq.All(ctx)
	if err != nil {
		panic(err)
	}
	return nodes
}

// IDs executes the query and returns a list of Key IDs.
func (kq *KeyQuery) IDs(ctx context.Context) (ids []int, err error) {
	if kq.ctx.Unique == nil && kq.path != nil {
		kq.Unique(true)
	}
	ctx = setContextOp(ctx, kq.ctx, "IDs")
	if err = kq.Select(key.FieldID).Scan(ctx, &ids); err != nil {
		return nil, err
	}
	return ids, nil
}

// IDsX is like IDs, but panics if an error occurs.
func (kq *KeyQuery) IDsX(ctx context.Context) []int {
	ids, err := kq.IDs(ctx)
	if err != nil {
		panic(err)
	}
	return ids
}

// Count returns the count of the given query.
func (kq *KeyQuery) Count(ctx context.Context) (int, error) {
	ctx = setContextOp(ctx, kq.ctx, "Count")
	if err := kq.prepareQuery(ctx); err != nil {
		return 0, err
	}
	return withInterceptors[int](ctx, kq, querierCount[*KeyQuery](), kq.inters)
}

// CountX is like Count, but panics if an error occurs.
func (kq *KeyQuery) CountX(ctx context.Context) int {
	count, err := kq.Count(ctx)
	if err != nil {
		panic(err)
	}
	return count
}

// Exist returns true if the query has elements in the graph.
func (kq *KeyQuery) Exist(ctx context.Context) (bool, error) {
	ctx = setContextOp(ctx, kq.ctx, "Exist")
	switch _, err := kq.FirstID(ctx); {
	case IsNotFound(err):
		return false, nil
	case err != nil:
		return false, fmt.Errorf("ent: check existence: %w", err)
	default:
		return true, nil
	}
}

// ExistX is like Exist, but panics if an error occurs.
func (kq *KeyQuery) ExistX(ctx context.Context) bool {
	exist, err := kq.Exist(ctx)
	if err != nil {
		panic(err)
	}
	return exist
}

// Clone returns a duplicate of the KeyQuery builder, including all associated steps. It can be
// used to prepare common query builders and use them differently after the clone is made.
func (kq *KeyQuery) Clone() *KeyQuery {
	if kq == nil {
		return nil
	}
	return &KeyQuery{
		config:     kq.config,
		ctx:        kq.ctx.Clone(),
		order:      append([]key.OrderOption{}, kq.order...),
		inters:     append([]Interceptor{}, kq.inters...),
		predicates: append([]predicate.Key{}, kq.predicates...),
		withSigned: kq.withSigned.Clone(),
		// clone intermediate query.
		sql:  kq.sql.Clone(),
		path: kq.path,
	}
}

// WithSigned tells the query-builder to eager-load the nodes that are connected to
// the "Signed" edge. The optional arguments are used to configure the query builder of the edge.
|
||||||
|
func (kq *KeyQuery) WithSigned(opts ...func(*TransactionsQuery)) *KeyQuery {
|
||||||
|
query := (&TransactionsClient{config: kq.config}).Query()
|
||||||
|
for _, opt := range opts {
|
||||||
|
opt(query)
|
||||||
|
}
|
||||||
|
kq.withSigned = query
|
||||||
|
return kq
|
||||||
|
}
|
||||||
|
|
||||||
|
// GroupBy is used to group vertices by one or more fields/columns.
|
||||||
|
// It is often used with aggregate functions, like: count, max, mean, min, sum.
|
||||||
|
//
|
||||||
|
// Example:
|
||||||
|
//
|
||||||
|
// var v []struct {
|
||||||
|
// PublicKey string `json:"publicKey,omitempty"`
|
||||||
|
// Count int `json:"count,omitempty"`
|
||||||
|
// }
|
||||||
|
//
|
||||||
|
// client.Key.Query().
|
||||||
|
// GroupBy(key.FieldPublicKey).
|
||||||
|
// Aggregate(ent.Count()).
|
||||||
|
// Scan(ctx, &v)
|
||||||
|
func (kq *KeyQuery) GroupBy(field string, fields ...string) *KeyGroupBy {
|
||||||
|
kq.ctx.Fields = append([]string{field}, fields...)
|
||||||
|
grbuild := &KeyGroupBy{build: kq}
|
||||||
|
grbuild.flds = &kq.ctx.Fields
|
||||||
|
grbuild.label = key.Label
|
||||||
|
grbuild.scan = grbuild.Scan
|
||||||
|
return grbuild
|
||||||
|
}
|
||||||
|
|
||||||
|
// Select allows the selection one or more fields/columns for the given query,
|
||||||
|
// instead of selecting all fields in the entity.
|
||||||
|
//
|
||||||
|
// Example:
|
||||||
|
//
|
||||||
|
// var v []struct {
|
||||||
|
// PublicKey string `json:"publicKey,omitempty"`
|
||||||
|
// }
|
||||||
|
//
|
||||||
|
// client.Key.Query().
|
||||||
|
// Select(key.FieldPublicKey).
|
||||||
|
// Scan(ctx, &v)
|
||||||
|
func (kq *KeyQuery) Select(fields ...string) *KeySelect {
|
||||||
|
kq.ctx.Fields = append(kq.ctx.Fields, fields...)
|
||||||
|
sbuild := &KeySelect{KeyQuery: kq}
|
||||||
|
sbuild.label = key.Label
|
||||||
|
sbuild.flds, sbuild.scan = &kq.ctx.Fields, sbuild.Scan
|
||||||
|
return sbuild
|
||||||
|
}
|
||||||
|
|
||||||
|
// Aggregate returns a KeySelect configured with the given aggregations.
|
||||||
|
func (kq *KeyQuery) Aggregate(fns ...AggregateFunc) *KeySelect {
|
||||||
|
return kq.Select().Aggregate(fns...)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (kq *KeyQuery) prepareQuery(ctx context.Context) error {
|
||||||
|
for _, inter := range kq.inters {
|
||||||
|
if inter == nil {
|
||||||
|
return fmt.Errorf("ent: uninitialized interceptor (forgotten import ent/runtime?)")
|
||||||
|
}
|
||||||
|
if trv, ok := inter.(Traverser); ok {
|
||||||
|
if err := trv.Traverse(ctx, kq); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
for _, f := range kq.ctx.Fields {
|
||||||
|
if !key.ValidColumn(f) {
|
||||||
|
return &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if kq.path != nil {
|
||||||
|
prev, err := kq.path(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
kq.sql = prev
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (kq *KeyQuery) sqlAll(ctx context.Context, hooks ...queryHook) ([]*Key, error) {
|
||||||
|
var (
|
||||||
|
nodes = []*Key{}
|
||||||
|
withFKs = kq.withFKs
|
||||||
|
_spec = kq.querySpec()
|
||||||
|
loadedTypes = [1]bool{
|
||||||
|
kq.withSigned != nil,
|
||||||
|
}
|
||||||
|
)
|
||||||
|
if withFKs {
|
||||||
|
_spec.Node.Columns = append(_spec.Node.Columns, key.ForeignKeys...)
|
||||||
|
}
|
||||||
|
_spec.ScanValues = func(columns []string) ([]any, error) {
|
||||||
|
return (*Key).scanValues(nil, columns)
|
||||||
|
}
|
||||||
|
_spec.Assign = func(columns []string, values []any) error {
|
||||||
|
node := &Key{config: kq.config}
|
||||||
|
nodes = append(nodes, node)
|
||||||
|
node.Edges.loadedTypes = loadedTypes
|
||||||
|
return node.assignValues(columns, values)
|
||||||
|
}
|
||||||
|
for i := range hooks {
|
||||||
|
hooks[i](ctx, _spec)
|
||||||
|
}
|
||||||
|
if err := sqlgraph.QueryNodes(ctx, kq.driver, _spec); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
if len(nodes) == 0 {
|
||||||
|
return nodes, nil
|
||||||
|
}
|
||||||
|
if query := kq.withSigned; query != nil {
|
||||||
|
if err := kq.loadSigned(ctx, query, nodes,
|
||||||
|
func(n *Key) { n.Edges.Signed = []*Transactions{} },
|
||||||
|
func(n *Key, e *Transactions) { n.Edges.Signed = append(n.Edges.Signed, e) }); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nodes, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (kq *KeyQuery) loadSigned(ctx context.Context, query *TransactionsQuery, nodes []*Key, init func(*Key), assign func(*Key, *Transactions)) error {
|
||||||
|
edgeIDs := make([]driver.Value, len(nodes))
|
||||||
|
byID := make(map[int]*Key)
|
||||||
|
nids := make(map[int]map[*Key]struct{})
|
||||||
|
for i, node := range nodes {
|
||||||
|
edgeIDs[i] = node.ID
|
||||||
|
byID[node.ID] = node
|
||||||
|
if init != nil {
|
||||||
|
init(node)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
query.Where(func(s *sql.Selector) {
|
||||||
|
joinT := sql.Table(key.SignedTable)
|
||||||
|
s.Join(joinT).On(s.C(transactions.FieldID), joinT.C(key.SignedPrimaryKey[1]))
|
||||||
|
s.Where(sql.InValues(joinT.C(key.SignedPrimaryKey[0]), edgeIDs...))
|
||||||
|
columns := s.SelectedColumns()
|
||||||
|
s.Select(joinT.C(key.SignedPrimaryKey[0]))
|
||||||
|
s.AppendSelect(columns...)
|
||||||
|
s.SetDistinct(false)
|
||||||
|
})
|
||||||
|
if err := query.prepareQuery(ctx); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
qr := QuerierFunc(func(ctx context.Context, q Query) (Value, error) {
|
||||||
|
return query.sqlAll(ctx, func(_ context.Context, spec *sqlgraph.QuerySpec) {
|
||||||
|
assign := spec.Assign
|
||||||
|
values := spec.ScanValues
|
||||||
|
spec.ScanValues = func(columns []string) ([]any, error) {
|
||||||
|
values, err := values(columns[1:])
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return append([]any{new(sql.NullInt64)}, values...), nil
|
||||||
|
}
|
||||||
|
spec.Assign = func(columns []string, values []any) error {
|
||||||
|
outValue := int(values[0].(*sql.NullInt64).Int64)
|
||||||
|
inValue := int(values[1].(*sql.NullInt64).Int64)
|
||||||
|
if nids[inValue] == nil {
|
||||||
|
nids[inValue] = map[*Key]struct{}{byID[outValue]: {}}
|
||||||
|
return assign(columns[1:], values[1:])
|
||||||
|
}
|
||||||
|
nids[inValue][byID[outValue]] = struct{}{}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
})
|
||||||
|
})
|
||||||
|
neighbors, err := withInterceptors[[]*Transactions](ctx, query, qr, query.inters)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
for _, n := range neighbors {
|
||||||
|
nodes, ok := nids[n.ID]
|
||||||
|
if !ok {
|
||||||
|
return fmt.Errorf(`unexpected "Signed" node returned %v`, n.ID)
|
||||||
|
}
|
||||||
|
for kn := range nodes {
|
||||||
|
assign(kn, n)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (kq *KeyQuery) sqlCount(ctx context.Context) (int, error) {
|
||||||
|
_spec := kq.querySpec()
|
||||||
|
_spec.Node.Columns = kq.ctx.Fields
|
||||||
|
if len(kq.ctx.Fields) > 0 {
|
||||||
|
_spec.Unique = kq.ctx.Unique != nil && *kq.ctx.Unique
|
||||||
|
}
|
||||||
|
return sqlgraph.CountNodes(ctx, kq.driver, _spec)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (kq *KeyQuery) querySpec() *sqlgraph.QuerySpec {
|
||||||
|
_spec := sqlgraph.NewQuerySpec(key.Table, key.Columns, sqlgraph.NewFieldSpec(key.FieldID, field.TypeInt))
|
||||||
|
_spec.From = kq.sql
|
||||||
|
if unique := kq.ctx.Unique; unique != nil {
|
||||||
|
_spec.Unique = *unique
|
||||||
|
} else if kq.path != nil {
|
||||||
|
_spec.Unique = true
|
||||||
|
}
|
||||||
|
if fields := kq.ctx.Fields; len(fields) > 0 {
|
||||||
|
_spec.Node.Columns = make([]string, 0, len(fields))
|
||||||
|
_spec.Node.Columns = append(_spec.Node.Columns, key.FieldID)
|
||||||
|
for i := range fields {
|
||||||
|
if fields[i] != key.FieldID {
|
||||||
|
_spec.Node.Columns = append(_spec.Node.Columns, fields[i])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if ps := kq.predicates; len(ps) > 0 {
|
||||||
|
_spec.Predicate = func(selector *sql.Selector) {
|
||||||
|
for i := range ps {
|
||||||
|
ps[i](selector)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if limit := kq.ctx.Limit; limit != nil {
|
||||||
|
_spec.Limit = *limit
|
||||||
|
}
|
||||||
|
if offset := kq.ctx.Offset; offset != nil {
|
||||||
|
_spec.Offset = *offset
|
||||||
|
}
|
||||||
|
if ps := kq.order; len(ps) > 0 {
|
||||||
|
_spec.Order = func(selector *sql.Selector) {
|
||||||
|
for i := range ps {
|
||||||
|
ps[i](selector)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return _spec
|
||||||
|
}
|
||||||
|
|
||||||
|
func (kq *KeyQuery) sqlQuery(ctx context.Context) *sql.Selector {
|
||||||
|
builder := sql.Dialect(kq.driver.Dialect())
|
||||||
|
t1 := builder.Table(key.Table)
|
||||||
|
columns := kq.ctx.Fields
|
||||||
|
if len(columns) == 0 {
|
||||||
|
columns = key.Columns
|
||||||
|
}
|
||||||
|
selector := builder.Select(t1.Columns(columns...)...).From(t1)
|
||||||
|
if kq.sql != nil {
|
||||||
|
selector = kq.sql
|
||||||
|
selector.Select(selector.Columns(columns...)...)
|
||||||
|
}
|
||||||
|
if kq.ctx.Unique != nil && *kq.ctx.Unique {
|
||||||
|
selector.Distinct()
|
||||||
|
}
|
||||||
|
for _, p := range kq.predicates {
|
||||||
|
p(selector)
|
||||||
|
}
|
||||||
|
for _, p := range kq.order {
|
||||||
|
p(selector)
|
||||||
|
}
|
||||||
|
if offset := kq.ctx.Offset; offset != nil {
|
||||||
|
// limit is mandatory for offset clause. We start
|
||||||
|
// with default value, and override it below if needed.
|
||||||
|
selector.Offset(*offset).Limit(math.MaxInt32)
|
||||||
|
}
|
||||||
|
if limit := kq.ctx.Limit; limit != nil {
|
||||||
|
selector.Limit(*limit)
|
||||||
|
}
|
||||||
|
return selector
|
||||||
|
}
|
||||||
|
|
||||||
|
// KeyGroupBy is the group-by builder for Key entities.
|
||||||
|
type KeyGroupBy struct {
|
||||||
|
selector
|
||||||
|
build *KeyQuery
|
||||||
|
}
|
||||||
|
|
||||||
|
// Aggregate adds the given aggregation functions to the group-by query.
|
||||||
|
func (kgb *KeyGroupBy) Aggregate(fns ...AggregateFunc) *KeyGroupBy {
|
||||||
|
kgb.fns = append(kgb.fns, fns...)
|
||||||
|
return kgb
|
||||||
|
}
|
||||||
|
|
||||||
|
// Scan applies the selector query and scans the result into the given value.
|
||||||
|
func (kgb *KeyGroupBy) Scan(ctx context.Context, v any) error {
|
||||||
|
ctx = setContextOp(ctx, kgb.build.ctx, "GroupBy")
|
||||||
|
if err := kgb.build.prepareQuery(ctx); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
return scanWithInterceptors[*KeyQuery, *KeyGroupBy](ctx, kgb.build, kgb, kgb.build.inters, v)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (kgb *KeyGroupBy) sqlScan(ctx context.Context, root *KeyQuery, v any) error {
|
||||||
|
selector := root.sqlQuery(ctx).Select()
|
||||||
|
aggregation := make([]string, 0, len(kgb.fns))
|
||||||
|
for _, fn := range kgb.fns {
|
||||||
|
aggregation = append(aggregation, fn(selector))
|
||||||
|
}
|
||||||
|
if len(selector.SelectedColumns()) == 0 {
|
||||||
|
columns := make([]string, 0, len(*kgb.flds)+len(kgb.fns))
|
||||||
|
for _, f := range *kgb.flds {
|
||||||
|
columns = append(columns, selector.C(f))
|
||||||
|
}
|
||||||
|
columns = append(columns, aggregation...)
|
||||||
|
selector.Select(columns...)
|
||||||
|
}
|
||||||
|
selector.GroupBy(selector.Columns(*kgb.flds...)...)
|
||||||
|
if err := selector.Err(); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
rows := &sql.Rows{}
|
||||||
|
query, args := selector.Query()
|
||||||
|
if err := kgb.build.driver.Query(ctx, query, args, rows); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer rows.Close()
|
||||||
|
return sql.ScanSlice(rows, v)
|
||||||
|
}
|
||||||
|
|
||||||
|
// KeySelect is the builder for selecting fields of Key entities.
|
||||||
|
type KeySelect struct {
|
||||||
|
*KeyQuery
|
||||||
|
selector
|
||||||
|
}
|
||||||
|
|
||||||
|
// Aggregate adds the given aggregation functions to the selector query.
|
||||||
|
func (ks *KeySelect) Aggregate(fns ...AggregateFunc) *KeySelect {
|
||||||
|
ks.fns = append(ks.fns, fns...)
|
||||||
|
return ks
|
||||||
|
}
|
||||||
|
|
||||||
|
// Scan applies the selector query and scans the result into the given value.
|
||||||
|
func (ks *KeySelect) Scan(ctx context.Context, v any) error {
|
||||||
|
ctx = setContextOp(ctx, ks.ctx, "Select")
|
||||||
|
if err := ks.prepareQuery(ctx); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
return scanWithInterceptors[*KeyQuery, *KeySelect](ctx, ks.KeyQuery, ks, ks.inters, v)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (ks *KeySelect) sqlScan(ctx context.Context, root *KeyQuery, v any) error {
|
||||||
|
selector := root.sqlQuery(ctx)
|
||||||
|
aggregation := make([]string, 0, len(ks.fns))
|
||||||
|
for _, fn := range ks.fns {
|
||||||
|
aggregation = append(aggregation, fn(selector))
|
||||||
|
}
|
||||||
|
switch n := len(*ks.selector.flds); {
|
||||||
|
case n == 0 && len(aggregation) > 0:
|
||||||
|
selector.Select(aggregation...)
|
||||||
|
case n != 0 && len(aggregation) > 0:
|
||||||
|
selector.AppendSelect(aggregation...)
|
||||||
|
}
|
||||||
|
rows := &sql.Rows{}
|
||||||
|
query, args := selector.Query()
|
||||||
|
if err := ks.driver.Query(ctx, query, args, rows); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer rows.Close()
|
||||||
|
return sql.ScanSlice(rows, v)
|
||||||
|
}
|
496
z2/backend/ent/key_update.go
Normal file
@ -0,0 +1,496 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"
	"errors"
	"fmt"
	"thesis/ent/key"
	"thesis/ent/predicate"
	"thesis/ent/transactions"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
	"entgo.io/ent/schema/field"
)

// KeyUpdate is the builder for updating Key entities.
type KeyUpdate struct {
	config
	hooks    []Hook
	mutation *KeyMutation
}

// Where appends a list predicates to the KeyUpdate builder.
func (ku *KeyUpdate) Where(ps ...predicate.Key) *KeyUpdate {
	ku.mutation.Where(ps...)
	return ku
}

// SetPublicKey sets the "publicKey" field.
func (ku *KeyUpdate) SetPublicKey(s string) *KeyUpdate {
	ku.mutation.SetPublicKey(s)
	return ku
}

// SetNillablePublicKey sets the "publicKey" field if the given value is not nil.
func (ku *KeyUpdate) SetNillablePublicKey(s *string) *KeyUpdate {
	if s != nil {
		ku.SetPublicKey(*s)
	}
	return ku
}

// SetOwner sets the "Owner" field.
func (ku *KeyUpdate) SetOwner(s string) *KeyUpdate {
	ku.mutation.SetOwner(s)
	return ku
}

// SetNillableOwner sets the "Owner" field if the given value is not nil.
func (ku *KeyUpdate) SetNillableOwner(s *string) *KeyUpdate {
	if s != nil {
		ku.SetOwner(*s)
	}
	return ku
}

// SetTrustScore sets the "trustScore" field.
func (ku *KeyUpdate) SetTrustScore(f float64) *KeyUpdate {
	ku.mutation.ResetTrustScore()
	ku.mutation.SetTrustScore(f)
	return ku
}

// SetNillableTrustScore sets the "trustScore" field if the given value is not nil.
func (ku *KeyUpdate) SetNillableTrustScore(f *float64) *KeyUpdate {
	if f != nil {
		ku.SetTrustScore(*f)
	}
	return ku
}

// AddTrustScore adds f to the "trustScore" field.
func (ku *KeyUpdate) AddTrustScore(f float64) *KeyUpdate {
	ku.mutation.AddTrustScore(f)
	return ku
}

// AddSignedIDs adds the "Signed" edge to the Transactions entity by IDs.
func (ku *KeyUpdate) AddSignedIDs(ids ...int) *KeyUpdate {
	ku.mutation.AddSignedIDs(ids...)
	return ku
}

// AddSigned adds the "Signed" edges to the Transactions entity.
func (ku *KeyUpdate) AddSigned(t ...*Transactions) *KeyUpdate {
	ids := make([]int, len(t))
	for i := range t {
		ids[i] = t[i].ID
	}
	return ku.AddSignedIDs(ids...)
}

// Mutation returns the KeyMutation object of the builder.
func (ku *KeyUpdate) Mutation() *KeyMutation {
	return ku.mutation
}

// ClearSigned clears all "Signed" edges to the Transactions entity.
func (ku *KeyUpdate) ClearSigned() *KeyUpdate {
	ku.mutation.ClearSigned()
	return ku
}

// RemoveSignedIDs removes the "Signed" edge to Transactions entities by IDs.
func (ku *KeyUpdate) RemoveSignedIDs(ids ...int) *KeyUpdate {
	ku.mutation.RemoveSignedIDs(ids...)
	return ku
}

// RemoveSigned removes "Signed" edges to Transactions entities.
func (ku *KeyUpdate) RemoveSigned(t ...*Transactions) *KeyUpdate {
	ids := make([]int, len(t))
	for i := range t {
		ids[i] = t[i].ID
	}
	return ku.RemoveSignedIDs(ids...)
}

// Save executes the query and returns the number of nodes affected by the update operation.
func (ku *KeyUpdate) Save(ctx context.Context) (int, error) {
	return withHooks(ctx, ku.sqlSave, ku.mutation, ku.hooks)
}

// SaveX is like Save, but panics if an error occurs.
func (ku *KeyUpdate) SaveX(ctx context.Context) int {
	affected, err := ku.Save(ctx)
	if err != nil {
		panic(err)
	}
	return affected
}

// Exec executes the query.
func (ku *KeyUpdate) Exec(ctx context.Context) error {
	_, err := ku.Save(ctx)
	return err
}

// ExecX is like Exec, but panics if an error occurs.
func (ku *KeyUpdate) ExecX(ctx context.Context) {
	if err := ku.Exec(ctx); err != nil {
		panic(err)
	}
}

// check runs all checks and user-defined validators on the builder.
func (ku *KeyUpdate) check() error {
	if v, ok := ku.mutation.PublicKey(); ok {
		if err := key.PublicKeyValidator(v); err != nil {
			return &ValidationError{Name: "publicKey", err: fmt.Errorf(`ent: validator failed for field "Key.publicKey": %w`, err)}
		}
	}
	if v, ok := ku.mutation.Owner(); ok {
		if err := key.OwnerValidator(v); err != nil {
			return &ValidationError{Name: "Owner", err: fmt.Errorf(`ent: validator failed for field "Key.Owner": %w`, err)}
		}
	}
	return nil
}

func (ku *KeyUpdate) sqlSave(ctx context.Context) (n int, err error) {
	if err := ku.check(); err != nil {
		return n, err
	}
	_spec := sqlgraph.NewUpdateSpec(key.Table, key.Columns, sqlgraph.NewFieldSpec(key.FieldID, field.TypeInt))
	if ps := ku.mutation.predicates; len(ps) > 0 {
		_spec.Predicate = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	if value, ok := ku.mutation.PublicKey(); ok {
		_spec.SetField(key.FieldPublicKey, field.TypeString, value)
	}
	if value, ok := ku.mutation.Owner(); ok {
		_spec.SetField(key.FieldOwner, field.TypeString, value)
	}
	if value, ok := ku.mutation.TrustScore(); ok {
		_spec.SetField(key.FieldTrustScore, field.TypeFloat64, value)
	}
	if value, ok := ku.mutation.AddedTrustScore(); ok {
		_spec.AddField(key.FieldTrustScore, field.TypeFloat64, value)
	}
	if ku.mutation.SignedCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2M,
			Inverse: false,
			Table:   key.SignedTable,
			Columns: key.SignedPrimaryKey,
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(transactions.FieldID, field.TypeInt),
			},
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := ku.mutation.RemovedSignedIDs(); len(nodes) > 0 && !ku.mutation.SignedCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2M,
			Inverse: false,
			Table:   key.SignedTable,
			Columns: key.SignedPrimaryKey,
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(transactions.FieldID, field.TypeInt),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := ku.mutation.SignedIDs(); len(nodes) > 0 {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2M,
			Inverse: false,
			Table:   key.SignedTable,
			Columns: key.SignedPrimaryKey,
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(transactions.FieldID, field.TypeInt),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges.Add = append(_spec.Edges.Add, edge)
	}
	if n, err = sqlgraph.UpdateNodes(ctx, ku.driver, _spec); err != nil {
		if _, ok := err.(*sqlgraph.NotFoundError); ok {
			err = &NotFoundError{key.Label}
		} else if sqlgraph.IsConstraintError(err) {
			err = &ConstraintError{msg: err.Error(), wrap: err}
		}
		return 0, err
	}
	ku.mutation.done = true
	return n, nil
}

// KeyUpdateOne is the builder for updating a single Key entity.
type KeyUpdateOne struct {
	config
	fields   []string
	hooks    []Hook
	mutation *KeyMutation
}

// SetPublicKey sets the "publicKey" field.
func (kuo *KeyUpdateOne) SetPublicKey(s string) *KeyUpdateOne {
	kuo.mutation.SetPublicKey(s)
	return kuo
}

// SetNillablePublicKey sets the "publicKey" field if the given value is not nil.
func (kuo *KeyUpdateOne) SetNillablePublicKey(s *string) *KeyUpdateOne {
	if s != nil {
		kuo.SetPublicKey(*s)
	}
	return kuo
}

// SetOwner sets the "Owner" field.
func (kuo *KeyUpdateOne) SetOwner(s string) *KeyUpdateOne {
	kuo.mutation.SetOwner(s)
	return kuo
}

// SetNillableOwner sets the "Owner" field if the given value is not nil.
func (kuo *KeyUpdateOne) SetNillableOwner(s *string) *KeyUpdateOne {
	if s != nil {
		kuo.SetOwner(*s)
	}
	return kuo
}

// SetTrustScore sets the "trustScore" field.
func (kuo *KeyUpdateOne) SetTrustScore(f float64) *KeyUpdateOne {
	kuo.mutation.ResetTrustScore()
	kuo.mutation.SetTrustScore(f)
	return kuo
}

// SetNillableTrustScore sets the "trustScore" field if the given value is not nil.
func (kuo *KeyUpdateOne) SetNillableTrustScore(f *float64) *KeyUpdateOne {
	if f != nil {
		kuo.SetTrustScore(*f)
	}
	return kuo
}

// AddTrustScore adds f to the "trustScore" field.
func (kuo *KeyUpdateOne) AddTrustScore(f float64) *KeyUpdateOne {
	kuo.mutation.AddTrustScore(f)
	return kuo
}

// AddSignedIDs adds the "Signed" edge to the Transactions entity by IDs.
func (kuo *KeyUpdateOne) AddSignedIDs(ids ...int) *KeyUpdateOne {
	kuo.mutation.AddSignedIDs(ids...)
	return kuo
}

// AddSigned adds the "Signed" edges to the Transactions entity.
func (kuo *KeyUpdateOne) AddSigned(t ...*Transactions) *KeyUpdateOne {
	ids := make([]int, len(t))
	for i := range t {
		ids[i] = t[i].ID
	}
	return kuo.AddSignedIDs(ids...)
}

// Mutation returns the KeyMutation object of the builder.
func (kuo *KeyUpdateOne) Mutation() *KeyMutation {
	return kuo.mutation
}

// ClearSigned clears all "Signed" edges to the Transactions entity.
func (kuo *KeyUpdateOne) ClearSigned() *KeyUpdateOne {
	kuo.mutation.ClearSigned()
	return kuo
}

// RemoveSignedIDs removes the "Signed" edge to Transactions entities by IDs.
func (kuo *KeyUpdateOne) RemoveSignedIDs(ids ...int) *KeyUpdateOne {
	kuo.mutation.RemoveSignedIDs(ids...)
	return kuo
}

// RemoveSigned removes "Signed" edges to Transactions entities.
func (kuo *KeyUpdateOne) RemoveSigned(t ...*Transactions) *KeyUpdateOne {
	ids := make([]int, len(t))
	for i := range t {
		ids[i] = t[i].ID
	}
	return kuo.RemoveSignedIDs(ids...)
}

// Where appends a list predicates to the KeyUpdate builder.
func (kuo *KeyUpdateOne) Where(ps ...predicate.Key) *KeyUpdateOne {
	kuo.mutation.Where(ps...)
	return kuo
}

// Select allows selecting one or more fields (columns) of the returned entity.
// The default is selecting all fields defined in the entity schema.
func (kuo *KeyUpdateOne) Select(field string, fields ...string) *KeyUpdateOne {
	kuo.fields = append([]string{field}, fields...)
	return kuo
}

// Save executes the query and returns the updated Key entity.
func (kuo *KeyUpdateOne) Save(ctx context.Context) (*Key, error) {
	return withHooks(ctx, kuo.sqlSave, kuo.mutation, kuo.hooks)
}

// SaveX is like Save, but panics if an error occurs.
func (kuo *KeyUpdateOne) SaveX(ctx context.Context) *Key {
	node, err := kuo.Save(ctx)
	if err != nil {
		panic(err)
	}
	return node
}

// Exec executes the query on the entity.
func (kuo *KeyUpdateOne) Exec(ctx context.Context) error {
	_, err := kuo.Save(ctx)
	return err
}

// ExecX is like Exec, but panics if an error occurs.
func (kuo *KeyUpdateOne) ExecX(ctx context.Context) {
	if err := kuo.Exec(ctx); err != nil {
		panic(err)
	}
}

// check runs all checks and user-defined validators on the builder.
func (kuo *KeyUpdateOne) check() error {
	if v, ok := kuo.mutation.PublicKey(); ok {
		if err := key.PublicKeyValidator(v); err != nil {
			return &ValidationError{Name: "publicKey", err: fmt.Errorf(`ent: validator failed for field "Key.publicKey": %w`, err)}
		}
	}
	if v, ok := kuo.mutation.Owner(); ok {
		if err := key.OwnerValidator(v); err != nil {
			return &ValidationError{Name: "Owner", err: fmt.Errorf(`ent: validator failed for field "Key.Owner": %w`, err)}
		}
	}
	return nil
}

func (kuo *KeyUpdateOne) sqlSave(ctx context.Context) (_node *Key, err error) {
	if err := kuo.check(); err != nil {
		return _node, err
	}
	_spec := sqlgraph.NewUpdateSpec(key.Table, key.Columns, sqlgraph.NewFieldSpec(key.FieldID, field.TypeInt))
	id, ok := kuo.mutation.ID()
	if !ok {
		return nil, &ValidationError{Name: "id", err: errors.New(`ent: missing "Key.id" for update`)}
	}
	_spec.Node.ID.Value = id
	if fields := kuo.fields; len(fields) > 0 {
		_spec.Node.Columns = make([]string, 0, len(fields))
		_spec.Node.Columns = append(_spec.Node.Columns, key.FieldID)
		for _, f := range fields {
			if !key.ValidColumn(f) {
				return nil, &ValidationError{Name: f, err: fmt.Errorf("ent: invalid field %q for query", f)}
			}
			if f != key.FieldID {
				_spec.Node.Columns = append(_spec.Node.Columns, f)
			}
		}
	}
	if ps := kuo.mutation.predicates; len(ps) > 0 {
		_spec.Predicate = func(selector *sql.Selector) {
			for i := range ps {
				ps[i](selector)
			}
		}
	}
	if value, ok := kuo.mutation.PublicKey(); ok {
		_spec.SetField(key.FieldPublicKey, field.TypeString, value)
	}
	if value, ok := kuo.mutation.Owner(); ok {
		_spec.SetField(key.FieldOwner, field.TypeString, value)
	}
	if value, ok := kuo.mutation.TrustScore(); ok {
		_spec.SetField(key.FieldTrustScore, field.TypeFloat64, value)
	}
	if value, ok := kuo.mutation.AddedTrustScore(); ok {
		_spec.AddField(key.FieldTrustScore, field.TypeFloat64, value)
	}
	if kuo.mutation.SignedCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2M,
			Inverse: false,
			Table:   key.SignedTable,
			Columns: key.SignedPrimaryKey,
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(transactions.FieldID, field.TypeInt),
			},
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := kuo.mutation.RemovedSignedIDs(); len(nodes) > 0 && !kuo.mutation.SignedCleared() {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2M,
			Inverse: false,
			Table:   key.SignedTable,
			Columns: key.SignedPrimaryKey,
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(transactions.FieldID, field.TypeInt),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges.Clear = append(_spec.Edges.Clear, edge)
	}
	if nodes := kuo.mutation.SignedIDs(); len(nodes) > 0 {
		edge := &sqlgraph.EdgeSpec{
			Rel:     sqlgraph.M2M,
			Inverse: false,
			Table:   key.SignedTable,
			Columns: key.SignedPrimaryKey,
			Bidi:    false,
			Target: &sqlgraph.EdgeTarget{
				IDSpec: sqlgraph.NewFieldSpec(transactions.FieldID, field.TypeInt),
			},
		}
		for _, k := range nodes {
			edge.Target.Nodes = append(edge.Target.Nodes, k)
		}
		_spec.Edges.Add = append(_spec.Edges.Add, edge)
	}
|
||||||
|
_node = &Key{config: kuo.config}
|
||||||
|
_spec.Assign = _node.assignValues
|
||||||
|
_spec.ScanValues = _node.scanValues
|
||||||
|
if err = sqlgraph.UpdateNode(ctx, kuo.driver, _spec); err != nil {
|
||||||
|
if _, ok := err.(*sqlgraph.NotFoundError); ok {
|
||||||
|
err = &NotFoundError{key.Label}
|
||||||
|
} else if sqlgraph.IsConstraintError(err) {
|
||||||
|
err = &ConstraintError{msg: err.Error(), wrap: err}
|
||||||
|
}
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
kuo.mutation.done = true
|
||||||
|
return _node, nil
|
||||||
|
}
|
64
z2/backend/ent/migrate/migrate.go
Normal file
@ -0,0 +1,64 @@
// Code generated by ent, DO NOT EDIT.

package migrate

import (
	"context"
	"fmt"
	"io"

	"entgo.io/ent/dialect"
	"entgo.io/ent/dialect/sql/schema"
)

var (
	// WithGlobalUniqueID sets the universal ids options to the migration.
	// If this option is enabled, ent migration will allocate a 1<<32 range
	// for the ids of each entity (table).
	// Note that this option cannot be applied on tables that already exist.
	WithGlobalUniqueID = schema.WithGlobalUniqueID
	// WithDropColumn sets the drop column option to the migration.
	// If this option is enabled, ent migration will drop old columns
	// that were used for both fields and edges. This defaults to false.
	WithDropColumn = schema.WithDropColumn
	// WithDropIndex sets the drop index option to the migration.
	// If this option is enabled, ent migration will drop old indexes
	// that were defined in the schema. This defaults to false.
	// Note that unique constraints are defined using `UNIQUE INDEX`,
	// and therefore, it's recommended to enable this option to get more
	// flexibility in the schema changes.
	WithDropIndex = schema.WithDropIndex
	// WithForeignKeys enables creating foreign-key in schema DDL. This defaults to true.
	WithForeignKeys = schema.WithForeignKeys
)

// Schema is the API for creating, migrating and dropping a schema.
type Schema struct {
	drv dialect.Driver
}

// NewSchema creates a new schema client.
func NewSchema(drv dialect.Driver) *Schema { return &Schema{drv: drv} }

// Create creates all schema resources.
func (s *Schema) Create(ctx context.Context, opts ...schema.MigrateOption) error {
	return Create(ctx, s, Tables, opts...)
}

// Create creates all table resources using the given schema driver.
func Create(ctx context.Context, s *Schema, tables []*schema.Table, opts ...schema.MigrateOption) error {
	migrate, err := schema.NewMigrate(s.drv, opts...)
	if err != nil {
		return fmt.Errorf("ent/migrate: %w", err)
	}
	return migrate.Create(ctx, tables...)
}

// WriteTo writes the schema changes to w instead of running them against the database.
//
//	if err := client.Schema.WriteTo(context.Background(), os.Stdout); err != nil {
//		log.Fatal(err)
//	}
func (s *Schema) WriteTo(ctx context.Context, w io.Writer, opts ...schema.MigrateOption) error {
	return Create(ctx, &Schema{drv: &schema.WriteDriver{Writer: w, Driver: s.drv}}, Tables, opts...)
}
177
z2/backend/ent/migrate/schema.go
Normal file
@ -0,0 +1,177 @@
// Code generated by ent, DO NOT EDIT.

package migrate

import (
	"entgo.io/ent/dialect/sql/schema"
	"entgo.io/ent/schema/field"
)

var (
	// BlocksColumns holds the columns for the "blocks" table.
	BlocksColumns = []*schema.Column{
		{Name: "id", Type: field.TypeInt, Increment: true},
		{Name: "hash", Type: field.TypeString, Unique: true},
		{Name: "length", Type: field.TypeInt},
		{Name: "previous_hash", Type: field.TypeString, Unique: true},
	}
	// BlocksTable holds the schema information for the "blocks" table.
	BlocksTable = &schema.Table{
		Name:       "blocks",
		Columns:    BlocksColumns,
		PrimaryKey: []*schema.Column{BlocksColumns[0]},
	}
	// KeysColumns holds the columns for the "keys" table.
	KeysColumns = []*schema.Column{
		{Name: "id", Type: field.TypeInt, Increment: true},
		{Name: "public_key", Type: field.TypeString, Unique: true},
		{Name: "owner", Type: field.TypeString},
		{Name: "trust_score", Type: field.TypeFloat64, Default: 0.2},
		{Name: "validators_key", Type: field.TypeInt, Nullable: true},
		{Name: "white_list_account", Type: field.TypeInt, Nullable: true},
	}
	// KeysTable holds the schema information for the "keys" table.
	KeysTable = &schema.Table{
		Name:       "keys",
		Columns:    KeysColumns,
		PrimaryKey: []*schema.Column{KeysColumns[0]},
		ForeignKeys: []*schema.ForeignKey{
			{
				Symbol:     "keys_validators_key",
				Columns:    []*schema.Column{KeysColumns[4]},
				RefColumns: []*schema.Column{ValidatorsColumns[0]},
				OnDelete:   schema.SetNull,
			},
			{
				Symbol:     "keys_white_lists_Account",
				Columns:    []*schema.Column{KeysColumns[5]},
				RefColumns: []*schema.Column{WhiteListsColumns[0]},
				OnDelete:   schema.SetNull,
			},
		},
	}
	// TransactionsColumns holds the columns for the "transactions" table.
	TransactionsColumns = []*schema.Column{
		{Name: "id", Type: field.TypeInt, Increment: true},
		{Name: "type", Type: field.TypeInt},
		{Name: "timestamp", Type: field.TypeInt},
		{Name: "comment", Type: field.TypeString},
		{Name: "content", Type: field.TypeBytes},
		{Name: "hash", Type: field.TypeString, Unique: true},
		{Name: "signature", Type: field.TypeString, Unique: true},
	}
	// TransactionsTable holds the schema information for the "transactions" table.
	TransactionsTable = &schema.Table{
		Name:       "transactions",
		Columns:    TransactionsColumns,
		PrimaryKey: []*schema.Column{TransactionsColumns[0]},
	}
	// ValidatorsColumns holds the columns for the "validators" table.
	ValidatorsColumns = []*schema.Column{
		{Name: "id", Type: field.TypeInt, Increment: true},
		{Name: "facilitator", Type: field.TypeString},
		{Name: "blocks_caster", Type: field.TypeInt, Nullable: true},
		{Name: "white_list_sponsor", Type: field.TypeInt, Nullable: true},
	}
	// ValidatorsTable holds the schema information for the "validators" table.
	ValidatorsTable = &schema.Table{
		Name:       "validators",
		Columns:    ValidatorsColumns,
		PrimaryKey: []*schema.Column{ValidatorsColumns[0]},
		ForeignKeys: []*schema.ForeignKey{
			{
				Symbol:     "validators_blocks_Caster",
				Columns:    []*schema.Column{ValidatorsColumns[2]},
				RefColumns: []*schema.Column{BlocksColumns[0]},
				OnDelete:   schema.SetNull,
			},
			{
				Symbol:     "validators_white_lists_Sponsor",
				Columns:    []*schema.Column{ValidatorsColumns[3]},
				RefColumns: []*schema.Column{WhiteListsColumns[0]},
				OnDelete:   schema.SetNull,
			},
		},
	}
	// WhiteListsColumns holds the columns for the "white_lists" table.
	WhiteListsColumns = []*schema.Column{
		{Name: "id", Type: field.TypeInt, Increment: true},
	}
	// WhiteListsTable holds the schema information for the "white_lists" table.
	WhiteListsTable = &schema.Table{
		Name:       "white_lists",
		Columns:    WhiteListsColumns,
		PrimaryKey: []*schema.Column{WhiteListsColumns[0]},
	}
	// BlocksMinedTxsColumns holds the columns for the "blocks_MinedTxs" table.
	BlocksMinedTxsColumns = []*schema.Column{
		{Name: "blocks_id", Type: field.TypeInt},
		{Name: "transactions_id", Type: field.TypeInt},
	}
	// BlocksMinedTxsTable holds the schema information for the "blocks_MinedTxs" table.
	BlocksMinedTxsTable = &schema.Table{
		Name:       "blocks_MinedTxs",
		Columns:    BlocksMinedTxsColumns,
		PrimaryKey: []*schema.Column{BlocksMinedTxsColumns[0], BlocksMinedTxsColumns[1]},
		ForeignKeys: []*schema.ForeignKey{
			{
				Symbol:     "blocks_MinedTxs_blocks_id",
				Columns:    []*schema.Column{BlocksMinedTxsColumns[0]},
				RefColumns: []*schema.Column{BlocksColumns[0]},
				OnDelete:   schema.Cascade,
			},
			{
				Symbol:     "blocks_MinedTxs_transactions_id",
				Columns:    []*schema.Column{BlocksMinedTxsColumns[1]},
				RefColumns: []*schema.Column{TransactionsColumns[0]},
				OnDelete:   schema.Cascade,
			},
		},
	}
	// KeySignedColumns holds the columns for the "key_Signed" table.
	KeySignedColumns = []*schema.Column{
		{Name: "key_id", Type: field.TypeInt},
		{Name: "transactions_id", Type: field.TypeInt},
	}
	// KeySignedTable holds the schema information for the "key_Signed" table.
	KeySignedTable = &schema.Table{
		Name:       "key_Signed",
		Columns:    KeySignedColumns,
		PrimaryKey: []*schema.Column{KeySignedColumns[0], KeySignedColumns[1]},
		ForeignKeys: []*schema.ForeignKey{
			{
				Symbol:     "key_Signed_key_id",
				Columns:    []*schema.Column{KeySignedColumns[0]},
				RefColumns: []*schema.Column{KeysColumns[0]},
				OnDelete:   schema.Cascade,
			},
			{
				Symbol:     "key_Signed_transactions_id",
				Columns:    []*schema.Column{KeySignedColumns[1]},
				RefColumns: []*schema.Column{TransactionsColumns[0]},
				OnDelete:   schema.Cascade,
			},
		},
	}
	// Tables holds all the tables in the schema.
	Tables = []*schema.Table{
		BlocksTable,
		KeysTable,
		TransactionsTable,
		ValidatorsTable,
		WhiteListsTable,
		BlocksMinedTxsTable,
		KeySignedTable,
	}
)

func init() {
	KeysTable.ForeignKeys[0].RefTable = ValidatorsTable
	KeysTable.ForeignKeys[1].RefTable = WhiteListsTable
	ValidatorsTable.ForeignKeys[0].RefTable = BlocksTable
	ValidatorsTable.ForeignKeys[1].RefTable = WhiteListsTable
	BlocksMinedTxsTable.ForeignKeys[0].RefTable = BlocksTable
	BlocksMinedTxsTable.ForeignKeys[1].RefTable = TransactionsTable
	KeySignedTable.ForeignKeys[0].RefTable = KeysTable
	KeySignedTable.ForeignKeys[1].RefTable = TransactionsTable
}
2950
z2/backend/ent/mutation.go
Normal file
22
z2/backend/ent/predicate/predicate.go
Normal file
@ -0,0 +1,22 @@
// Code generated by ent, DO NOT EDIT.

package predicate

import (
	"entgo.io/ent/dialect/sql"
)

// Blocks is the predicate function for blocks builders.
type Blocks func(*sql.Selector)

// Key is the predicate function for key builders.
type Key func(*sql.Selector)

// Transactions is the predicate function for transactions builders.
type Transactions func(*sql.Selector)

// Validators is the predicate function for validators builders.
type Validators func(*sql.Selector)

// WhiteList is the predicate function for whitelist builders.
type WhiteList func(*sql.Selector)
35
z2/backend/ent/runtime.go
Normal file
@ -0,0 +1,35 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"thesis/ent/key"
	"thesis/ent/schema"
	"thesis/ent/validators"
)

// The init function reads all schema descriptors with runtime code
// (default values, validators, hooks and policies) and stitches it
// to their package variables.
func init() {
	keyFields := schema.Key{}.Fields()
	_ = keyFields
	// keyDescPublicKey is the schema descriptor for publicKey field.
	keyDescPublicKey := keyFields[0].Descriptor()
	// key.PublicKeyValidator is a validator for the "publicKey" field. It is called by the builders before save.
	key.PublicKeyValidator = keyDescPublicKey.Validators[0].(func(string) error)
	// keyDescOwner is the schema descriptor for Owner field.
	keyDescOwner := keyFields[1].Descriptor()
	// key.OwnerValidator is a validator for the "Owner" field. It is called by the builders before save.
	key.OwnerValidator = keyDescOwner.Validators[0].(func(string) error)
	// keyDescTrustScore is the schema descriptor for trustScore field.
	keyDescTrustScore := keyFields[2].Descriptor()
	// key.DefaultTrustScore holds the default value on creation for the trustScore field.
	key.DefaultTrustScore = keyDescTrustScore.Default.(float64)
	validatorsFields := schema.Validators{}.Fields()
	_ = validatorsFields
	// validatorsDescFacilitator is the schema descriptor for facilitator field.
	validatorsDescFacilitator := validatorsFields[0].Descriptor()
	// validators.FacilitatorValidator is a validator for the "facilitator" field. It is called by the builders before save.
	validators.FacilitatorValidator = validatorsDescFacilitator.Validators[0].(func(string) error)
}
10
z2/backend/ent/runtime/runtime.go
Normal file
@ -0,0 +1,10 @@
// Code generated by ent, DO NOT EDIT.

package runtime

// The schema-stitching logic is generated in thesis/ent/runtime.go

const (
	Version = "v0.13.1"                                         // Version of ent codegen.
	Sum     = "h1:uD8QwN1h6SNphdCCzmkMN3feSUzNnVvV/WIkHKMbzOE=" // Sum of ent codegen.
)
30
z2/backend/ent/schema/blocks.go
Normal file
@ -0,0 +1,30 @@
package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/edge"
	"entgo.io/ent/schema/field"
)

// Blocks holds the schema definition for the Blocks entity.
type Blocks struct {
	ent.Schema
}

// Fields of the Blocks.
func (Blocks) Fields() []ent.Field {
	return []ent.Field{
		field.String("hash").Unique(),
		field.Int("id").Unique(),
		field.Int("length"),
		field.String("previousHash").Unique(),
	}
}

// Edges of the Blocks.
func (Blocks) Edges() []ent.Edge {
	return []ent.Edge{
		edge.To("Caster", Validators.Type),
		edge.To("MinedTxs", Transactions.Type),
	}
}
28
z2/backend/ent/schema/key.go
Normal file
@ -0,0 +1,28 @@
package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/edge"
	"entgo.io/ent/schema/field"
)

// Key holds the schema definition for the Key entity.
type Key struct {
	ent.Schema
}

// Fields of the Key.
func (Key) Fields() []ent.Field {
	return []ent.Field{
		field.String("publicKey").NotEmpty().Unique(),
		field.String("Owner").NotEmpty(),
		field.Float("trustScore").Default(0.2),
	}
}

// Edges of the Key.
func (Key) Edges() []ent.Edge {
	return []ent.Edge{
		edge.To("Signed", Transactions.Type),
	}
}
32
z2/backend/ent/schema/transactions.go
Normal file
@ -0,0 +1,32 @@
package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/edge"
	"entgo.io/ent/schema/field"
)

// Transactions holds the schema definition for the Transactions entity.
type Transactions struct {
	ent.Schema
}

// Fields of the Transactions.
func (Transactions) Fields() []ent.Field {
	return []ent.Field{
		field.Int("type"),
		field.Int("timestamp"),
		field.String("comment"),
		field.Bytes("content"),
		field.String("hash").Unique(),
		field.String("signature").Unique(),
	}
}

// Edges of the Transactions.
func (Transactions) Edges() []ent.Edge {
	return []ent.Edge{
		edge.From("Signer", Key.Type).Ref("Signed"),
		edge.From("Block", Blocks.Type).Ref("MinedTxs"),
	}
}
26
z2/backend/ent/schema/validators.go
Normal file
@ -0,0 +1,26 @@
package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/edge"
	"entgo.io/ent/schema/field"
)

// Validators holds the schema definition for the Validators entity.
type Validators struct {
	ent.Schema
}

// Fields of the Validators.
func (Validators) Fields() []ent.Field {
	return []ent.Field{
		field.String("facilitator").NotEmpty(),
	}
}

// Edges of the Validators.
func (Validators) Edges() []ent.Edge {
	return []ent.Edge{
		edge.To("key", Key.Type),
	}
}
24
z2/backend/ent/schema/whitelist.go
Normal file
@ -0,0 +1,24 @@
package schema

import (
	"entgo.io/ent"
	"entgo.io/ent/schema/edge"
)

// WhiteList holds the schema definition for the WhiteList entity.
type WhiteList struct {
	ent.Schema
}

// Fields of the WhiteList.
func (WhiteList) Fields() []ent.Field {
	return []ent.Field{}
}

// Edges of the WhiteList.
func (WhiteList) Edges() []ent.Edge {
	return []ent.Edge{
		edge.To("Sponsor", Validators.Type),
		edge.To("Account", Key.Type),
	}
}
202
z2/backend/ent/transactions.go
Normal file
@ -0,0 +1,202 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"fmt"
	"strings"
	"thesis/ent/transactions"

	"entgo.io/ent"
	"entgo.io/ent/dialect/sql"
)

// Transactions is the model entity for the Transactions schema.
type Transactions struct {
	config `json:"-"`
	// ID of the ent.
	ID int `json:"id,omitempty"`
	// Type holds the value of the "type" field.
	Type int `json:"type,omitempty"`
	// Timestamp holds the value of the "timestamp" field.
	Timestamp int `json:"timestamp,omitempty"`
	// Comment holds the value of the "comment" field.
	Comment string `json:"comment,omitempty"`
	// Content holds the value of the "content" field.
	Content []byte `json:"content,omitempty"`
	// Hash holds the value of the "hash" field.
	Hash string `json:"hash,omitempty"`
	// Signature holds the value of the "signature" field.
	Signature string `json:"signature,omitempty"`
	// Edges holds the relations/edges for other nodes in the graph.
	// The values are being populated by the TransactionsQuery when eager-loading is set.
	Edges        TransactionsEdges `json:"edges"`
	selectValues sql.SelectValues
}

// TransactionsEdges holds the relations/edges for other nodes in the graph.
type TransactionsEdges struct {
	// Signer holds the value of the Signer edge.
	Signer []*Key `json:"Signer,omitempty"`
	// Block holds the value of the Block edge.
	Block []*Blocks `json:"Block,omitempty"`
	// loadedTypes holds the information for reporting if a
	// type was loaded (or requested) in eager-loading or not.
	loadedTypes [2]bool
}

// SignerOrErr returns the Signer value or an error if the edge
// was not loaded in eager-loading.
func (e TransactionsEdges) SignerOrErr() ([]*Key, error) {
	if e.loadedTypes[0] {
		return e.Signer, nil
	}
	return nil, &NotLoadedError{edge: "Signer"}
}

// BlockOrErr returns the Block value or an error if the edge
// was not loaded in eager-loading.
func (e TransactionsEdges) BlockOrErr() ([]*Blocks, error) {
	if e.loadedTypes[1] {
		return e.Block, nil
	}
	return nil, &NotLoadedError{edge: "Block"}
}

// scanValues returns the types for scanning values from sql.Rows.
func (*Transactions) scanValues(columns []string) ([]any, error) {
	values := make([]any, len(columns))
	for i := range columns {
		switch columns[i] {
		case transactions.FieldContent:
			values[i] = new([]byte)
		case transactions.FieldID, transactions.FieldType, transactions.FieldTimestamp:
			values[i] = new(sql.NullInt64)
		case transactions.FieldComment, transactions.FieldHash, transactions.FieldSignature:
			values[i] = new(sql.NullString)
		default:
			values[i] = new(sql.UnknownType)
		}
	}
	return values, nil
}

// assignValues assigns the values that were returned from sql.Rows (after scanning)
// to the Transactions fields.
func (t *Transactions) assignValues(columns []string, values []any) error {
	if m, n := len(values), len(columns); m < n {
		return fmt.Errorf("mismatch number of scan values: %d != %d", m, n)
	}
	for i := range columns {
		switch columns[i] {
		case transactions.FieldID:
			value, ok := values[i].(*sql.NullInt64)
			if !ok {
				return fmt.Errorf("unexpected type %T for field id", value)
			}
			t.ID = int(value.Int64)
		case transactions.FieldType:
			if value, ok := values[i].(*sql.NullInt64); !ok {
				return fmt.Errorf("unexpected type %T for field type", values[i])
			} else if value.Valid {
				t.Type = int(value.Int64)
			}
		case transactions.FieldTimestamp:
			if value, ok := values[i].(*sql.NullInt64); !ok {
				return fmt.Errorf("unexpected type %T for field timestamp", values[i])
			} else if value.Valid {
				t.Timestamp = int(value.Int64)
			}
		case transactions.FieldComment:
			if value, ok := values[i].(*sql.NullString); !ok {
				return fmt.Errorf("unexpected type %T for field comment", values[i])
			} else if value.Valid {
				t.Comment = value.String
			}
		case transactions.FieldContent:
			if value, ok := values[i].(*[]byte); !ok {
				return fmt.Errorf("unexpected type %T for field content", values[i])
			} else if value != nil {
				t.Content = *value
			}
		case transactions.FieldHash:
			if value, ok := values[i].(*sql.NullString); !ok {
				return fmt.Errorf("unexpected type %T for field hash", values[i])
			} else if value.Valid {
				t.Hash = value.String
			}
		case transactions.FieldSignature:
			if value, ok := values[i].(*sql.NullString); !ok {
				return fmt.Errorf("unexpected type %T for field signature", values[i])
			} else if value.Valid {
				t.Signature = value.String
			}
		default:
			t.selectValues.Set(columns[i], values[i])
		}
	}
	return nil
}

// Value returns the ent.Value that was dynamically selected and assigned to the Transactions.
// This includes values selected through modifiers, order, etc.
func (t *Transactions) Value(name string) (ent.Value, error) {
	return t.selectValues.Get(name)
}

// QuerySigner queries the "Signer" edge of the Transactions entity.
func (t *Transactions) QuerySigner() *KeyQuery {
	return NewTransactionsClient(t.config).QuerySigner(t)
}

// QueryBlock queries the "Block" edge of the Transactions entity.
func (t *Transactions) QueryBlock() *BlocksQuery {
	return NewTransactionsClient(t.config).QueryBlock(t)
}

// Update returns a builder for updating this Transactions.
// Note that you need to call Transactions.Unwrap() before calling this method if this Transactions
// was returned from a transaction, and the transaction was committed or rolled back.
func (t *Transactions) Update() *TransactionsUpdateOne {
	return NewTransactionsClient(t.config).UpdateOne(t)
}

// Unwrap unwraps the Transactions entity that was returned from a transaction after it was closed,
// so that all future queries will be executed through the driver which created the transaction.
func (t *Transactions) Unwrap() *Transactions {
	_tx, ok := t.config.driver.(*txDriver)
	if !ok {
		panic("ent: Transactions is not a transactional entity")
	}
	t.config.driver = _tx.drv
	return t
}

// String implements the fmt.Stringer.
func (t *Transactions) String() string {
	var builder strings.Builder
	builder.WriteString("Transactions(")
	builder.WriteString(fmt.Sprintf("id=%v, ", t.ID))
	builder.WriteString("type=")
	builder.WriteString(fmt.Sprintf("%v", t.Type))
	builder.WriteString(", ")
	builder.WriteString("timestamp=")
	builder.WriteString(fmt.Sprintf("%v", t.Timestamp))
	builder.WriteString(", ")
	builder.WriteString("comment=")
	builder.WriteString(t.Comment)
	builder.WriteString(", ")
	builder.WriteString("content=")
	builder.WriteString(fmt.Sprintf("%v", t.Content))
	builder.WriteString(", ")
	builder.WriteString("hash=")
	builder.WriteString(t.Hash)
	builder.WriteString(", ")
	builder.WriteString("signature=")
	builder.WriteString(t.Signature)
	builder.WriteByte(')')
	return builder.String()
}

// TransactionsSlice is a parsable slice of Transactions.
type TransactionsSlice []*Transactions
148
z2/backend/ent/transactions/transactions.go
Normal file
@ -0,0 +1,148 @@
// Code generated by ent, DO NOT EDIT.

package transactions

import (
	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
)

const (
	// Label holds the string label denoting the transactions type in the database.
	Label = "transactions"
	// FieldID holds the string denoting the id field in the database.
	FieldID = "id"
	// FieldType holds the string denoting the type field in the database.
	FieldType = "type"
	// FieldTimestamp holds the string denoting the timestamp field in the database.
	FieldTimestamp = "timestamp"
	// FieldComment holds the string denoting the comment field in the database.
	FieldComment = "comment"
	// FieldContent holds the string denoting the content field in the database.
	FieldContent = "content"
	// FieldHash holds the string denoting the hash field in the database.
	FieldHash = "hash"
	// FieldSignature holds the string denoting the signature field in the database.
	FieldSignature = "signature"
	// EdgeSigner holds the string denoting the signer edge name in mutations.
	EdgeSigner = "Signer"
	// EdgeBlock holds the string denoting the block edge name in mutations.
	EdgeBlock = "Block"
	// Table holds the table name of the transactions in the database.
	Table = "transactions"
	// SignerTable is the table that holds the Signer relation/edge. The primary key declared below.
	SignerTable = "key_Signed"
	// SignerInverseTable is the table name for the Key entity.
	// It exists in this package in order to avoid circular dependency with the "key" package.
	SignerInverseTable = "keys"
	// BlockTable is the table that holds the Block relation/edge. The primary key declared below.
	BlockTable = "blocks_MinedTxs"
	// BlockInverseTable is the table name for the Blocks entity.
	// It exists in this package in order to avoid circular dependency with the "blocks" package.
	BlockInverseTable = "blocks"
)

// Columns holds all SQL columns for transactions fields.
var Columns = []string{
	FieldID,
	FieldType,
	FieldTimestamp,
	FieldComment,
	FieldContent,
	FieldHash,
	FieldSignature,
}

var (
	// SignerPrimaryKey and SignerColumn2 are the table columns denoting the
	// primary key for the Signer relation (M2M).
	SignerPrimaryKey = []string{"key_id", "transactions_id"}
	// BlockPrimaryKey and BlockColumn2 are the table columns denoting the
	// primary key for the Block relation (M2M).
	BlockPrimaryKey = []string{"blocks_id", "transactions_id"}
)

// ValidColumn reports if the column name is valid (part of the table columns).
func ValidColumn(column string) bool {
	for i := range Columns {
		if column == Columns[i] {
			return true
		}
	}
	return false
}

// OrderOption defines the ordering options for the Transactions queries.
type OrderOption func(*sql.Selector)

// ByID orders the results by the id field.
func ByID(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldID, opts...).ToFunc()
}

// ByType orders the results by the type field.
func ByType(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldType, opts...).ToFunc()
}

// ByTimestamp orders the results by the timestamp field.
func ByTimestamp(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldTimestamp, opts...).ToFunc()
}

// ByComment orders the results by the comment field.
func ByComment(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldComment, opts...).ToFunc()
}

// ByHash orders the results by the hash field.
func ByHash(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldHash, opts...).ToFunc()
}

// BySignature orders the results by the signature field.
func BySignature(opts ...sql.OrderTermOption) OrderOption {
	return sql.OrderByField(FieldSignature, opts...).ToFunc()
}

// BySignerCount orders the results by Signer count.
func BySignerCount(opts ...sql.OrderTermOption) OrderOption {
	return func(s *sql.Selector) {
		sqlgraph.OrderByNeighborsCount(s, newSignerStep(), opts...)
	}
}

// BySigner orders the results by Signer terms.
func BySigner(term sql.OrderTerm, terms ...sql.OrderTerm) OrderOption {
	return func(s *sql.Selector) {
		sqlgraph.OrderByNeighborTerms(s, newSignerStep(), append([]sql.OrderTerm{term}, terms...)...)
	}
}

// ByBlockCount orders the results by Block count.
func ByBlockCount(opts ...sql.OrderTermOption) OrderOption {
	return func(s *sql.Selector) {
		sqlgraph.OrderByNeighborsCount(s, newBlockStep(), opts...)
	}
}

// ByBlock orders the results by Block terms.
func ByBlock(term sql.OrderTerm, terms ...sql.OrderTerm) OrderOption {
	return func(s *sql.Selector) {
		sqlgraph.OrderByNeighborTerms(s, newBlockStep(), append([]sql.OrderTerm{term}, terms...)...)
	}
}
func newSignerStep() *sqlgraph.Step {
	return sqlgraph.NewStep(
		sqlgraph.From(Table, FieldID),
		sqlgraph.To(SignerInverseTable, FieldID),
		sqlgraph.Edge(sqlgraph.M2M, true, SignerTable, SignerPrimaryKey...),
	)
}
func newBlockStep() *sqlgraph.Step {
	return sqlgraph.NewStep(
		sqlgraph.From(Table, FieldID),
		sqlgraph.To(BlockInverseTable, FieldID),
		sqlgraph.Edge(sqlgraph.M2M, true, BlockTable, BlockPrimaryKey...),
	)
}
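As an aside, the generated `ValidColumn` helper above is nothing more than a linear scan over the column list. A minimal standalone sketch of the same logic (column names copied from the generated constants, the `main` demo added for illustration):

```go
package main

import "fmt"

// Columns mirrors the generated list of SQL columns for the transactions table.
var Columns = []string{"id", "type", "timestamp", "comment", "content", "hash", "signature"}

// ValidColumn reports whether name is one of the table's columns,
// exactly as the generated helper does: a linear scan over Columns.
func ValidColumn(column string) bool {
	for i := range Columns {
		if column == Columns[i] {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(ValidColumn("hash"))  // a real column
	fmt.Println(ValidColumn("nonce")) // not part of this schema
}
```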
461
z2/backend/ent/transactions/where.go
Normal file
@ -0,0 +1,461 @@
// Code generated by ent, DO NOT EDIT.

package transactions

import (
	"thesis/ent/predicate"

	"entgo.io/ent/dialect/sql"
	"entgo.io/ent/dialect/sql/sqlgraph"
)

// ID filters vertices based on their ID field.
func ID(id int) predicate.Transactions {
	return predicate.Transactions(sql.FieldEQ(FieldID, id))
}

// IDEQ applies the EQ predicate on the ID field.
func IDEQ(id int) predicate.Transactions {
	return predicate.Transactions(sql.FieldEQ(FieldID, id))
}

// IDNEQ applies the NEQ predicate on the ID field.
func IDNEQ(id int) predicate.Transactions {
	return predicate.Transactions(sql.FieldNEQ(FieldID, id))
}

// IDIn applies the In predicate on the ID field.
func IDIn(ids ...int) predicate.Transactions {
	return predicate.Transactions(sql.FieldIn(FieldID, ids...))
}

// IDNotIn applies the NotIn predicate on the ID field.
func IDNotIn(ids ...int) predicate.Transactions {
	return predicate.Transactions(sql.FieldNotIn(FieldID, ids...))
}

// IDGT applies the GT predicate on the ID field.
func IDGT(id int) predicate.Transactions {
	return predicate.Transactions(sql.FieldGT(FieldID, id))
}

// IDGTE applies the GTE predicate on the ID field.
func IDGTE(id int) predicate.Transactions {
	return predicate.Transactions(sql.FieldGTE(FieldID, id))
}

// IDLT applies the LT predicate on the ID field.
func IDLT(id int) predicate.Transactions {
	return predicate.Transactions(sql.FieldLT(FieldID, id))
}

// IDLTE applies the LTE predicate on the ID field.
func IDLTE(id int) predicate.Transactions {
	return predicate.Transactions(sql.FieldLTE(FieldID, id))
}

// Type applies equality check predicate on the "type" field. It's identical to TypeEQ.
func Type(v int) predicate.Transactions {
	return predicate.Transactions(sql.FieldEQ(FieldType, v))
}

// Timestamp applies equality check predicate on the "timestamp" field. It's identical to TimestampEQ.
func Timestamp(v int) predicate.Transactions {
	return predicate.Transactions(sql.FieldEQ(FieldTimestamp, v))
}

// Comment applies equality check predicate on the "comment" field. It's identical to CommentEQ.
func Comment(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldEQ(FieldComment, v))
}

// Content applies equality check predicate on the "content" field. It's identical to ContentEQ.
func Content(v []byte) predicate.Transactions {
	return predicate.Transactions(sql.FieldEQ(FieldContent, v))
}

// Hash applies equality check predicate on the "hash" field. It's identical to HashEQ.
func Hash(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldEQ(FieldHash, v))
}

// Signature applies equality check predicate on the "signature" field. It's identical to SignatureEQ.
func Signature(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldEQ(FieldSignature, v))
}

// TypeEQ applies the EQ predicate on the "type" field.
func TypeEQ(v int) predicate.Transactions {
	return predicate.Transactions(sql.FieldEQ(FieldType, v))
}

// TypeNEQ applies the NEQ predicate on the "type" field.
func TypeNEQ(v int) predicate.Transactions {
	return predicate.Transactions(sql.FieldNEQ(FieldType, v))
}

// TypeIn applies the In predicate on the "type" field.
func TypeIn(vs ...int) predicate.Transactions {
	return predicate.Transactions(sql.FieldIn(FieldType, vs...))
}

// TypeNotIn applies the NotIn predicate on the "type" field.
func TypeNotIn(vs ...int) predicate.Transactions {
	return predicate.Transactions(sql.FieldNotIn(FieldType, vs...))
}

// TypeGT applies the GT predicate on the "type" field.
func TypeGT(v int) predicate.Transactions {
	return predicate.Transactions(sql.FieldGT(FieldType, v))
}

// TypeGTE applies the GTE predicate on the "type" field.
func TypeGTE(v int) predicate.Transactions {
	return predicate.Transactions(sql.FieldGTE(FieldType, v))
}

// TypeLT applies the LT predicate on the "type" field.
func TypeLT(v int) predicate.Transactions {
	return predicate.Transactions(sql.FieldLT(FieldType, v))
}

// TypeLTE applies the LTE predicate on the "type" field.
func TypeLTE(v int) predicate.Transactions {
	return predicate.Transactions(sql.FieldLTE(FieldType, v))
}

// TimestampEQ applies the EQ predicate on the "timestamp" field.
func TimestampEQ(v int) predicate.Transactions {
	return predicate.Transactions(sql.FieldEQ(FieldTimestamp, v))
}

// TimestampNEQ applies the NEQ predicate on the "timestamp" field.
func TimestampNEQ(v int) predicate.Transactions {
	return predicate.Transactions(sql.FieldNEQ(FieldTimestamp, v))
}

// TimestampIn applies the In predicate on the "timestamp" field.
func TimestampIn(vs ...int) predicate.Transactions {
	return predicate.Transactions(sql.FieldIn(FieldTimestamp, vs...))
}

// TimestampNotIn applies the NotIn predicate on the "timestamp" field.
func TimestampNotIn(vs ...int) predicate.Transactions {
	return predicate.Transactions(sql.FieldNotIn(FieldTimestamp, vs...))
}

// TimestampGT applies the GT predicate on the "timestamp" field.
func TimestampGT(v int) predicate.Transactions {
	return predicate.Transactions(sql.FieldGT(FieldTimestamp, v))
}

// TimestampGTE applies the GTE predicate on the "timestamp" field.
func TimestampGTE(v int) predicate.Transactions {
	return predicate.Transactions(sql.FieldGTE(FieldTimestamp, v))
}

// TimestampLT applies the LT predicate on the "timestamp" field.
func TimestampLT(v int) predicate.Transactions {
	return predicate.Transactions(sql.FieldLT(FieldTimestamp, v))
}

// TimestampLTE applies the LTE predicate on the "timestamp" field.
func TimestampLTE(v int) predicate.Transactions {
	return predicate.Transactions(sql.FieldLTE(FieldTimestamp, v))
}

// CommentEQ applies the EQ predicate on the "comment" field.
func CommentEQ(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldEQ(FieldComment, v))
}

// CommentNEQ applies the NEQ predicate on the "comment" field.
func CommentNEQ(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldNEQ(FieldComment, v))
}

// CommentIn applies the In predicate on the "comment" field.
func CommentIn(vs ...string) predicate.Transactions {
	return predicate.Transactions(sql.FieldIn(FieldComment, vs...))
}

// CommentNotIn applies the NotIn predicate on the "comment" field.
func CommentNotIn(vs ...string) predicate.Transactions {
	return predicate.Transactions(sql.FieldNotIn(FieldComment, vs...))
}

// CommentGT applies the GT predicate on the "comment" field.
func CommentGT(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldGT(FieldComment, v))
}

// CommentGTE applies the GTE predicate on the "comment" field.
func CommentGTE(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldGTE(FieldComment, v))
}

// CommentLT applies the LT predicate on the "comment" field.
func CommentLT(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldLT(FieldComment, v))
}

// CommentLTE applies the LTE predicate on the "comment" field.
func CommentLTE(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldLTE(FieldComment, v))
}

// CommentContains applies the Contains predicate on the "comment" field.
func CommentContains(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldContains(FieldComment, v))
}

// CommentHasPrefix applies the HasPrefix predicate on the "comment" field.
func CommentHasPrefix(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldHasPrefix(FieldComment, v))
}

// CommentHasSuffix applies the HasSuffix predicate on the "comment" field.
func CommentHasSuffix(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldHasSuffix(FieldComment, v))
}

// CommentEqualFold applies the EqualFold predicate on the "comment" field.
func CommentEqualFold(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldEqualFold(FieldComment, v))
}

// CommentContainsFold applies the ContainsFold predicate on the "comment" field.
func CommentContainsFold(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldContainsFold(FieldComment, v))
}

// ContentEQ applies the EQ predicate on the "content" field.
func ContentEQ(v []byte) predicate.Transactions {
	return predicate.Transactions(sql.FieldEQ(FieldContent, v))
}

// ContentNEQ applies the NEQ predicate on the "content" field.
func ContentNEQ(v []byte) predicate.Transactions {
	return predicate.Transactions(sql.FieldNEQ(FieldContent, v))
}

// ContentIn applies the In predicate on the "content" field.
func ContentIn(vs ...[]byte) predicate.Transactions {
	return predicate.Transactions(sql.FieldIn(FieldContent, vs...))
}

// ContentNotIn applies the NotIn predicate on the "content" field.
func ContentNotIn(vs ...[]byte) predicate.Transactions {
	return predicate.Transactions(sql.FieldNotIn(FieldContent, vs...))
}

// ContentGT applies the GT predicate on the "content" field.
func ContentGT(v []byte) predicate.Transactions {
	return predicate.Transactions(sql.FieldGT(FieldContent, v))
}

// ContentGTE applies the GTE predicate on the "content" field.
func ContentGTE(v []byte) predicate.Transactions {
	return predicate.Transactions(sql.FieldGTE(FieldContent, v))
}

// ContentLT applies the LT predicate on the "content" field.
func ContentLT(v []byte) predicate.Transactions {
	return predicate.Transactions(sql.FieldLT(FieldContent, v))
}

// ContentLTE applies the LTE predicate on the "content" field.
func ContentLTE(v []byte) predicate.Transactions {
	return predicate.Transactions(sql.FieldLTE(FieldContent, v))
}

// HashEQ applies the EQ predicate on the "hash" field.
func HashEQ(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldEQ(FieldHash, v))
}

// HashNEQ applies the NEQ predicate on the "hash" field.
func HashNEQ(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldNEQ(FieldHash, v))
}

// HashIn applies the In predicate on the "hash" field.
func HashIn(vs ...string) predicate.Transactions {
	return predicate.Transactions(sql.FieldIn(FieldHash, vs...))
}

// HashNotIn applies the NotIn predicate on the "hash" field.
func HashNotIn(vs ...string) predicate.Transactions {
	return predicate.Transactions(sql.FieldNotIn(FieldHash, vs...))
}

// HashGT applies the GT predicate on the "hash" field.
func HashGT(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldGT(FieldHash, v))
}

// HashGTE applies the GTE predicate on the "hash" field.
func HashGTE(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldGTE(FieldHash, v))
}

// HashLT applies the LT predicate on the "hash" field.
func HashLT(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldLT(FieldHash, v))
}

// HashLTE applies the LTE predicate on the "hash" field.
func HashLTE(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldLTE(FieldHash, v))
}

// HashContains applies the Contains predicate on the "hash" field.
func HashContains(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldContains(FieldHash, v))
}

// HashHasPrefix applies the HasPrefix predicate on the "hash" field.
func HashHasPrefix(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldHasPrefix(FieldHash, v))
}

// HashHasSuffix applies the HasSuffix predicate on the "hash" field.
func HashHasSuffix(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldHasSuffix(FieldHash, v))
}

// HashEqualFold applies the EqualFold predicate on the "hash" field.
func HashEqualFold(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldEqualFold(FieldHash, v))
}

// HashContainsFold applies the ContainsFold predicate on the "hash" field.
func HashContainsFold(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldContainsFold(FieldHash, v))
}

// SignatureEQ applies the EQ predicate on the "signature" field.
func SignatureEQ(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldEQ(FieldSignature, v))
}

// SignatureNEQ applies the NEQ predicate on the "signature" field.
func SignatureNEQ(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldNEQ(FieldSignature, v))
}

// SignatureIn applies the In predicate on the "signature" field.
func SignatureIn(vs ...string) predicate.Transactions {
	return predicate.Transactions(sql.FieldIn(FieldSignature, vs...))
}

// SignatureNotIn applies the NotIn predicate on the "signature" field.
func SignatureNotIn(vs ...string) predicate.Transactions {
	return predicate.Transactions(sql.FieldNotIn(FieldSignature, vs...))
}

// SignatureGT applies the GT predicate on the "signature" field.
func SignatureGT(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldGT(FieldSignature, v))
}

// SignatureGTE applies the GTE predicate on the "signature" field.
func SignatureGTE(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldGTE(FieldSignature, v))
}

// SignatureLT applies the LT predicate on the "signature" field.
func SignatureLT(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldLT(FieldSignature, v))
}

// SignatureLTE applies the LTE predicate on the "signature" field.
func SignatureLTE(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldLTE(FieldSignature, v))
}

// SignatureContains applies the Contains predicate on the "signature" field.
func SignatureContains(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldContains(FieldSignature, v))
}

// SignatureHasPrefix applies the HasPrefix predicate on the "signature" field.
func SignatureHasPrefix(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldHasPrefix(FieldSignature, v))
}

// SignatureHasSuffix applies the HasSuffix predicate on the "signature" field.
func SignatureHasSuffix(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldHasSuffix(FieldSignature, v))
}

// SignatureEqualFold applies the EqualFold predicate on the "signature" field.
func SignatureEqualFold(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldEqualFold(FieldSignature, v))
}

// SignatureContainsFold applies the ContainsFold predicate on the "signature" field.
func SignatureContainsFold(v string) predicate.Transactions {
	return predicate.Transactions(sql.FieldContainsFold(FieldSignature, v))
}

// HasSigner applies the HasEdge predicate on the "Signer" edge.
func HasSigner() predicate.Transactions {
	return predicate.Transactions(func(s *sql.Selector) {
		step := sqlgraph.NewStep(
			sqlgraph.From(Table, FieldID),
			sqlgraph.Edge(sqlgraph.M2M, true, SignerTable, SignerPrimaryKey...),
		)
		sqlgraph.HasNeighbors(s, step)
	})
}

// HasSignerWith applies the HasEdge predicate on the "Signer" edge with a given conditions (other predicates).
func HasSignerWith(preds ...predicate.Key) predicate.Transactions {
	return predicate.Transactions(func(s *sql.Selector) {
		step := newSignerStep()
		sqlgraph.HasNeighborsWith(s, step, func(s *sql.Selector) {
			for _, p := range preds {
				p(s)
			}
		})
	})
}

// HasBlock applies the HasEdge predicate on the "Block" edge.
func HasBlock() predicate.Transactions {
	return predicate.Transactions(func(s *sql.Selector) {
		step := sqlgraph.NewStep(
			sqlgraph.From(Table, FieldID),
			sqlgraph.Edge(sqlgraph.M2M, true, BlockTable, BlockPrimaryKey...),
		)
		sqlgraph.HasNeighbors(s, step)
	})
}

// HasBlockWith applies the HasEdge predicate on the "Block" edge with a given conditions (other predicates).
func HasBlockWith(preds ...predicate.Blocks) predicate.Transactions {
	return predicate.Transactions(func(s *sql.Selector) {
		step := newBlockStep()
		sqlgraph.HasNeighborsWith(s, step, func(s *sql.Selector) {
			for _, p := range preds {
				p(s)
			}
		})
	})
}

// And groups predicates with the AND operator between them.
func And(predicates ...predicate.Transactions) predicate.Transactions {
	return predicate.Transactions(sql.AndPredicates(predicates...))
}

// Or groups predicates with the OR operator between them.
func Or(predicates ...predicate.Transactions) predicate.Transactions {
	return predicate.Transactions(sql.OrPredicates(predicates...))
}

// Not applies the not operator on the given predicate.
func Not(p predicate.Transactions) predicate.Transactions {
	return predicate.Transactions(sql.NotPredicates(p))
}
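The predicates above all follow one pattern: each exported function returns a closure over the query selector, and `And`/`Or` compose those closures. This is not ent's real implementation; the following is only a minimal self-contained illustration of that closure-composition pattern, with the `Predicate`, `FieldEQ`, `And`, and `Build` names invented for the sketch:

```go
package main

import (
	"fmt"
	"strings"
)

// Predicate mirrors the shape of the generated predicate type:
// a function that appends a condition to a query under construction.
type Predicate func(conds *[]string)

// FieldEQ builds an equality condition, loosely analogous to sql.FieldEQ.
func FieldEQ(field, value string) Predicate {
	return func(conds *[]string) {
		*conds = append(*conds, fmt.Sprintf("%s = %q", field, value))
	}
}

// And groups predicates with AND, like the generated And helper.
func And(ps ...Predicate) Predicate {
	return func(conds *[]string) {
		var inner []string
		for _, p := range ps {
			p(&inner)
		}
		*conds = append(*conds, "("+strings.Join(inner, " AND ")+")")
	}
}

// Build applies a predicate to an empty condition list and renders SQL-like text.
func Build(p Predicate) string {
	var conds []string
	p(&conds)
	return "SELECT * FROM transactions WHERE " + strings.Join(conds, " ")
}

func main() {
	fmt.Println(Build(And(FieldEQ("hash", "abc"), FieldEQ("signature", "sig"))))
}
```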
312
z2/backend/ent/transactions_create.go
Normal file
@ -0,0 +1,312 @@
// Code generated by ent, DO NOT EDIT.

package ent

import (
	"context"
	"errors"
	"fmt"
	"thesis/ent/blocks"
	"thesis/ent/key"
	"thesis/ent/transactions"

	"entgo.io/ent/dialect/sql/sqlgraph"
	"entgo.io/ent/schema/field"
)

// TransactionsCreate is the builder for creating a Transactions entity.
type TransactionsCreate struct {
	config
	mutation *TransactionsMutation
	hooks    []Hook
}

// SetType sets the "type" field.
func (tc *TransactionsCreate) SetType(i int) *TransactionsCreate {
	tc.mutation.SetType(i)
	return tc
}

// SetTimestamp sets the "timestamp" field.
func (tc *TransactionsCreate) SetTimestamp(i int) *TransactionsCreate {
	tc.mutation.SetTimestamp(i)
	return tc
}

// SetComment sets the "comment" field.
func (tc *TransactionsCreate) SetComment(s string) *TransactionsCreate {
	tc.mutation.SetComment(s)
	return tc
}

// SetContent sets the "content" field.
func (tc *TransactionsCreate) SetContent(b []byte) *TransactionsCreate {
	tc.mutation.SetContent(b)
	return tc
}

// SetHash sets the "hash" field.
func (tc *TransactionsCreate) SetHash(s string) *TransactionsCreate {
	tc.mutation.SetHash(s)
	return tc
}

// SetSignature sets the "signature" field.
func (tc *TransactionsCreate) SetSignature(s string) *TransactionsCreate {
	tc.mutation.SetSignature(s)
	return tc
}

// AddSignerIDs adds the "Signer" edge to the Key entity by IDs.
func (tc *TransactionsCreate) AddSignerIDs(ids ...int) *TransactionsCreate {
	tc.mutation.AddSignerIDs(ids...)
	return tc
}

// AddSigner adds the "Signer" edges to the Key entity.
func (tc *TransactionsCreate) AddSigner(k ...*Key) *TransactionsCreate {
	ids := make([]int, len(k))
	for i := range k {
		ids[i] = k[i].ID
	}
	return tc.AddSignerIDs(ids...)
}

// AddBlockIDs adds the "Block" edge to the Blocks entity by IDs.
func (tc *TransactionsCreate) AddBlockIDs(ids ...int) *TransactionsCreate {
	tc.mutation.AddBlockIDs(ids...)
	return tc
}

// AddBlock adds the "Block" edges to the Blocks entity.
func (tc *TransactionsCreate) AddBlock(b ...*Blocks) *TransactionsCreate {
	ids := make([]int, len(b))
	for i := range b {
		ids[i] = b[i].ID
	}
	return tc.AddBlockIDs(ids...)
}

// Mutation returns the TransactionsMutation object of the builder.
func (tc *TransactionsCreate) Mutation() *TransactionsMutation {
	return tc.mutation
}

// Save creates the Transactions in the database.
func (tc *TransactionsCreate) Save(ctx context.Context) (*Transactions, error) {
	return withHooks(ctx, tc.sqlSave, tc.mutation, tc.hooks)
}

// SaveX calls Save and panics if Save returns an error.
func (tc *TransactionsCreate) SaveX(ctx context.Context) *Transactions {
	v, err := tc.Save(ctx)
	if err != nil {
		panic(err)
	}
	return v
}

// Exec executes the query.
func (tc *TransactionsCreate) Exec(ctx context.Context) error {
	_, err := tc.Save(ctx)
	return err
}

// ExecX is like Exec, but panics if an error occurs.
func (tc *TransactionsCreate) ExecX(ctx context.Context) {
	if err := tc.Exec(ctx); err != nil {
		panic(err)
	}
}

// check runs all checks and user-defined validators on the builder.
func (tc *TransactionsCreate) check() error {
	if _, ok := tc.mutation.GetType(); !ok {
		return &ValidationError{Name: "type", err: errors.New(`ent: missing required field "Transactions.type"`)}
	}
	if _, ok := tc.mutation.Timestamp(); !ok {
		return &ValidationError{Name: "timestamp", err: errors.New(`ent: missing required field "Transactions.timestamp"`)}
	}
	if _, ok := tc.mutation.Comment(); !ok {
		return &ValidationError{Name: "comment", err: errors.New(`ent: missing required field "Transactions.comment"`)}
	}
	if _, ok := tc.mutation.Content(); !ok {
		return &ValidationError{Name: "content", err: errors.New(`ent: missing required field "Transactions.content"`)}
	}
	if _, ok := tc.mutation.Hash(); !ok {
		return &ValidationError{Name: "hash", err: errors.New(`ent: missing required field "Transactions.hash"`)}
	}
	if _, ok := tc.mutation.Signature(); !ok {
		return &ValidationError{Name: "signature", err: errors.New(`ent: missing required field "Transactions.signature"`)}
	}
	return nil
}

func (tc *TransactionsCreate) sqlSave(ctx context.Context) (*Transactions, error) {
	if err := tc.check(); err != nil {
		return nil, err
	}
	_node, _spec := tc.createSpec()
	if err := sqlgraph.CreateNode(ctx, tc.driver, _spec); err != nil {
		if sqlgraph.IsConstraintError(err) {
			err = &ConstraintError{msg: err.Error(), wrap: err}
		}
		return nil, err
	}
	id := _spec.ID.Value.(int64)
	_node.ID = int(id)
	tc.mutation.id = &_node.ID
	tc.mutation.done = true
	return _node, nil
}

func (tc *TransactionsCreate) createSpec() (*Transactions, *sqlgraph.CreateSpec) {
	var (
		_node = &Transactions{config: tc.config}
		_spec = sqlgraph.NewCreateSpec(transactions.Table, sqlgraph.NewFieldSpec(transactions.FieldID, field.TypeInt))
	)
	if value, ok := tc.mutation.GetType(); ok {
		_spec.SetField(transactions.FieldType, field.TypeInt, value)
		_node.Type = value
	}
	if value, ok := tc.mutation.Timestamp(); ok {
		_spec.SetField(transactions.FieldTimestamp, field.TypeInt, value)
		_node.Timestamp = value
	}
	if value, ok := tc.mutation.Comment(); ok {
		_spec.SetField(transactions.FieldComment, field.TypeString, value)
		_node.Comment = value
	}
	if value, ok := tc.mutation.Content(); ok {
		_spec.SetField(transactions.FieldContent, field.TypeBytes, value)
		_node.Content = value
	}
	if value, ok := tc.mutation.Hash(); ok {
		_spec.SetField(transactions.FieldHash, field.TypeString, value)
		_node.Hash = value
	}
	if value, ok := tc.mutation.Signature(); ok {
		_spec.SetField(transactions.FieldSignature, field.TypeString, value)
		_node.Signature = value
	}
	if nodes := tc.mutation.SignerIDs(); len(nodes) > 0 {
|
edge := &sqlgraph.EdgeSpec{
|
||||||
|
Rel: sqlgraph.M2M,
|
||||||
|
Inverse: true,
|
||||||
|
Table: transactions.SignerTable,
|
||||||
|
Columns: transactions.SignerPrimaryKey,
|
||||||
|
Bidi: false,
|
||||||
|
Target: &sqlgraph.EdgeTarget{
|
||||||
|
IDSpec: sqlgraph.NewFieldSpec(key.FieldID, field.TypeInt),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for _, k := range nodes {
|
||||||
|
edge.Target.Nodes = append(edge.Target.Nodes, k)
|
||||||
|
}
|
||||||
|
_spec.Edges = append(_spec.Edges, edge)
|
||||||
|
}
|
||||||
|
if nodes := tc.mutation.BlockIDs(); len(nodes) > 0 {
|
||||||
|
edge := &sqlgraph.EdgeSpec{
|
||||||
|
Rel: sqlgraph.M2M,
|
||||||
|
Inverse: true,
|
||||||
|
Table: transactions.BlockTable,
|
||||||
|
Columns: transactions.BlockPrimaryKey,
|
||||||
|
Bidi: false,
|
||||||
|
Target: &sqlgraph.EdgeTarget{
|
||||||
|
IDSpec: sqlgraph.NewFieldSpec(blocks.FieldID, field.TypeInt),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for _, k := range nodes {
|
||||||
|
edge.Target.Nodes = append(edge.Target.Nodes, k)
|
||||||
|
}
|
||||||
|
_spec.Edges = append(_spec.Edges, edge)
|
||||||
|
}
|
||||||
|
return _node, _spec
|
||||||
|
}
|
||||||
|
|
||||||
|
// TransactionsCreateBulk is the builder for creating many Transactions entities in bulk.
|
||||||
|
type TransactionsCreateBulk struct {
|
||||||
|
config
|
||||||
|
err error
|
||||||
|
builders []*TransactionsCreate
|
||||||
|
}
|
||||||
|
|
||||||
|
// Save creates the Transactions entities in the database.
|
||||||
|
func (tcb *TransactionsCreateBulk) Save(ctx context.Context) ([]*Transactions, error) {
|
||||||
|
if tcb.err != nil {
|
||||||
|
return nil, tcb.err
|
||||||
|
}
|
||||||
|
specs := make([]*sqlgraph.CreateSpec, len(tcb.builders))
|
||||||
|
nodes := make([]*Transactions, len(tcb.builders))
|
||||||
|
mutators := make([]Mutator, len(tcb.builders))
|
||||||
|
for i := range tcb.builders {
|
||||||
|
func(i int, root context.Context) {
|
||||||
|
builder := tcb.builders[i]
|
||||||
|
var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
|
||||||
|
mutation, ok := m.(*TransactionsMutation)
|
||||||
|
if !ok {
|
||||||
|
return nil, fmt.Errorf("unexpected mutation type %T", m)
|
||||||
|
}
|
||||||
|
if err := builder.check(); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
builder.mutation = mutation
|
||||||
|
var err error
|
||||||
|
nodes[i], specs[i] = builder.createSpec()
|
||||||
|
if i < len(mutators)-1 {
|
||||||
|
_, err = mutators[i+1].Mutate(root, tcb.builders[i+1].mutation)
|
||||||
|
} else {
|
||||||
|
spec := &sqlgraph.BatchCreateSpec{Nodes: specs}
|
||||||
|
// Invoke the actual operation on the latest mutation in the chain.
|
||||||
|
if err = sqlgraph.BatchCreate(ctx, tcb.driver, spec); err != nil {
|
||||||
|
if sqlgraph.IsConstraintError(err) {
|
||||||
|
err = &ConstraintError{msg: err.Error(), wrap: err}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
mutation.id = &nodes[i].ID
|
||||||
|
if specs[i].ID.Value != nil {
|
||||||
|
id := specs[i].ID.Value.(int64)
|
||||||
|
nodes[i].ID = int(id)
|
||||||
|
}
|
||||||
|
mutation.done = true
|
||||||
|
return nodes[i], nil
|
||||||
|
})
|
||||||
|
for i := len(builder.hooks) - 1; i >= 0; i-- {
|
||||||
|
mut = builder.hooks[i](mut)
|
||||||
|
}
|
||||||
|
mutators[i] = mut
|
||||||
|
}(i, ctx)
|
||||||
|
}
|
||||||
|
if len(mutators) > 0 {
|
||||||
|
if _, err := mutators[0].Mutate(ctx, tcb.builders[0].mutation); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nodes, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// SaveX is like Save, but panics if an error occurs.
|
||||||
|
func (tcb *TransactionsCreateBulk) SaveX(ctx context.Context) []*Transactions {
|
||||||
|
v, err := tcb.Save(ctx)
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
return v
|
||||||
|
}
|
||||||
|
|
||||||
|
// Exec executes the query.
|
||||||
|
func (tcb *TransactionsCreateBulk) Exec(ctx context.Context) error {
|
||||||
|
_, err := tcb.Save(ctx)
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
// ExecX is like Exec, but panics if an error occurs.
|
||||||
|
func (tcb *TransactionsCreateBulk) ExecX(ctx context.Context) {
|
||||||
|
if err := tcb.Exec(ctx); err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
}
|