Using encrypted data for machine learning without decrypting it
This article discusses advanced cryptographic techniques and is only an overview of research conducted at Julia Computing. Do not use the examples given here in commercial applications, and always consult professional cryptographers before applying cryptography.
Here you can download the package that implements all the magic, and here is the code that is discussed in the article.
Introduction
Let's say you just developed a cool new machine learning model (using Flux.jl, of course), and now you want to deploy it for your users. How will you do this? Probably the easiest way is to ship the model to your users and let it run locally on their data. But this approach has disadvantages:
- Machine learning models are large, and user computers may not have enough computing or disk resources.
- Machine learning models are often updated, and it may not be convenient for you to regularly send large amounts of data over the network.
- Developing the model takes a lot of time and computing resources, and you may want to be compensated for that by charging a fee for using your model.
The usual answer is to serve the model in the cloud behind an API. Over the past few years many such services have appeared, and every large cloud platform offers something similar to enterprise developers. But potential users face an obvious dilemma: their data is now processed on a remote server that may not be trustworthy. This has clear ethical and legal implications that limit the use of such services. In regulated industries, especially healthcare and financial services, it is often simply not allowed to send patient or client data to third parties for processing.
Are there other options?
It turns out there are! Recent advances in cryptography make it possible to compute on data without ever decrypting it. In our example, a user sends encrypted data (say, images) to a cloud API that runs a machine learning model and then returns an encrypted answer. At no point is the data decrypted; the cloud provider never gets access to the original images and cannot decrypt the computed prediction either. How is this possible? Let's find out by building a service for handwritten digit recognition on encrypted images from the MNIST dataset.
About homomorphic encryption
The ability to perform computations on encrypted data is commonly referred to as "secure computation". It is a large research area, with many different cryptographic approaches for different application scenarios. We will focus on a technique called "homomorphic encryption". A homomorphic encryption system typically provides the following operations:
pub_key, eval_key, priv_key = keygen()
encrypted = encrypt(pub_key, plaintext)
decrypted = decrypt(priv_key, encrypted)
encrypted′ = eval(eval_key, f, encrypted)
The first three operations are simple and familiar to anyone who has used asymmetric encryption before (for example, when connecting over TLS). All the magic happens in the last operation: given an encrypted value, it evaluates the function f and returns another encrypted value corresponding to the result of evaluating f on the underlying plaintext. This property is what gives the approach its name. Evaluation commutes with the encryption operation:

f(decrypt(priv_key, encrypted)) == decrypt(priv_key, eval(eval_key, f, encrypted))

Equivalently, eval computes an arbitrary homomorphism f on the encrypted value.
Which functions f are supported depends on the cryptographic scheme and its supported operations. If only a single f is supported (for example, f = +), the scheme is called "partially homomorphic". If f can be any complete set of gates out of which arbitrary circuits can be built, then the scheme is called "somewhat homomorphic" when the circuit size is limited, and "fully homomorphic" when it is unlimited. A somewhat homomorphic scheme can be turned into a fully homomorphic one using a technique called bootstrapping, but that is beyond the scope of this article. Fully homomorphic encryption is a relatively recent discovery: the first working (though impractical) scheme was published by Craig Gentry in 2009. There are a number of more recent (and practical) fully homomorphic schemes, as well as software packages that implement them well; the most common are Microsoft SEAL and PALISADE. In addition, I recently released a pure Julia implementation of these algorithms. In this article we will use the CKKS scheme implemented there.
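To make "partially homomorphic" concrete, here is a minimal toy sketch using textbook (unpadded) RSA, which happens to be multiplicatively homomorphic. It is completely insecure and unrelated to the CKKS scheme used below; it only shows what it means for a single operation f (here, multiplication) to be supported on ciphertexts.

# Toy illustration of a partially homomorphic scheme: textbook (unpadded) RSA.
# Completely insecure parameters, for intuition only.
p, q = 61, 53                      # tiny "primes"
n, λ = p*q, lcm(p-1, q-1)
e = 17                             # public exponent
d = invmod(e, λ)                   # private exponent

enc(m) = powermod(m, e, n)
dec(c) = powermod(c, d, n)

m1, m2 = 7, 6
# Multiplying the ciphertexts corresponds to multiplying the plaintexts:
@assert dec(mod(enc(m1) * enc(m2), n)) == mod(m1 * m2, n)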
CKKS Overview
CKKS (named after the authors of the paper that proposed the algorithm in 2016: Cheon, Kim, Kim and Song) is a homomorphic encryption scheme that allows homomorphic evaluation of the following primitive operations:
- Element-wise addition of length-n vectors of complex numbers.
- Element-wise multiplication of length-n vectors of complex numbers.
- Rotation (in the circshift sense) of the elements in the vector.
- Complex conjugation of the vector elements.
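As a mental model only (this is not ToyFHE code), the four primitives correspond to the following operations on ordinary, unencrypted Julia vectors of complex numbers:

x = ComplexF64[1, 2, 3, 4]
y = ComplexF64[5, 6, 7, 8]

x .+ y            # element-wise addition
x .* y            # element-wise multiplication
circshift(x, 1)   # rotation of the elements
conj.(x)          # complex conjugation of the elements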
The parameter n depends on the desired security level and precision and is usually quite large. In our example it will be 4096 (a larger value increases security but also makes computations more expensive; the cost scales roughly as n log n).
In addition, computations in CKKS are noisy, so the results are only approximate, and care must be taken to evaluate everything with enough precision that the noise does not affect the correctness of the result.
On the other hand, such restrictions are not unusual for developers of machine learning packages. Special accelerators like GPUs also usually operate on vectors of numbers, and for many practitioners floating point numbers can already feel noisy because of the influence of algorithm selection, multithreading, and so on. I want to emphasize the key difference: floating point arithmetic is inherently deterministic, even if that is not always apparent because of the complexity of the implementation, whereas the CKKS primitives really are noisy. Still, this perspective may help users see that the noise is less scary than it might seem.
Now let's see how to perform these operations in Julia (note: the parameters chosen here are very insecure; these examples only illustrate the use of the library in the REPL).
julia> using ToyFHE

# Let's play with 8 element vectors
julia> N = 8;

# Choose some parameters - we'll talk about it later
julia> ℛ = NegacyclicRing(2N, (40, 40, 40))
ℤ₁₃₂₉₂₂₇₉₉₇₅₆₈₀₈₁₄₅₇₄₀₂₇₀₁₂₀₇₁₀₄₂₄₈₂₅₇/(x¹⁶ + 1)

# We'll use CKKS
julia> params = CKKSParams(ℛ)
CKKS parameters

# We need to pick a scaling factor for a number - again we'll talk about that later
julia> Tscale = FixedRational{2^40}
FixedRational{1099511627776,T} where T

# Let's start with a plain Vector of zeros
julia> plain = CKKSEncoding{Tscale}(zero(ℛ))
8-element CKKSEncoding{FixedRational{1099511627776,T} where T} with indices 0:7:
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im

# Ok, we're ready to get started, but first we'll need some keys
julia> kp = keygen(params)
CKKS key pair

julia> kp.priv
CKKS private key

julia> kp.pub
CKKS public key

# Alright, let's encrypt some things:
julia> foreach(i->plain[i] = i+1, 0:7); plain
8-element CKKSEncoding{FixedRational{1099511627776,T} where T} with indices 0:7:
 1.0 + 0.0im
 2.0 + 0.0im
 3.0 + 0.0im
 4.0 + 0.0im
 5.0 + 0.0im
 6.0 + 0.0im
 7.0 + 0.0im
 8.0 + 0.0im

julia> c = encrypt(kp.pub, plain)
CKKS ciphertext (length 2, encoding CKKSEncoding{FixedRational{1099511627776,T} where T})

# And decrypt it again
julia> decrypt(kp.priv, c)
8-element CKKSEncoding{FixedRational{1099511627776,T} where T} with indices 0:7:
 0.9999999999995506 - 2.7335193113350057e-16im
 1.9999999999989408 - 3.885780586188048e-16im
 3.000000000000205 + 1.6772825551165524e-16im
 4.000000000000538 - 3.885780586188048e-16im
 4.999999999998865 + 8.382500573679615e-17im
 6.000000000000185 + 4.996003610813204e-16im
 7.000000000001043 - 2.0024593503998215e-16im
 8.000000000000673 + 4.996003610813204e-16im

# Note that we had some noise. Let's go through all the primitive operations we'll need:
julia> decrypt(kp.priv, c+c)
8-element CKKSEncoding{FixedRational{1099511627776,T} where T} with indices 0:7:
  1.9999999999991012 - 5.467038622670011e-16im
  3.9999999999978817 - 7.771561172376096e-16im
  6.00000000000041 + 3.354565110233105e-16im
  8.000000000001076 - 7.771561172376096e-16im
  9.99999999999773 + 1.676500114735923e-16im
 12.00000000000037 + 9.992007221626409e-16im
 14.000000000002085 - 4.004918700799643e-16im
 16.000000000001346 + 9.992007221626409e-16im

julia> csq = c*c
CKKS ciphertext (length 3, encoding CKKSEncoding{FixedRational{1208925819614629174706176,T} where T})

julia> decrypt(kp.priv, csq)
8-element CKKSEncoding{FixedRational{1208925819614629174706176,T} where T} with indices 0:7:
  0.9999999999991012 - 2.350516767363621e-15im
  3.9999999999957616 - 5.773159728050814e-15im
  9.000000000001226 - 2.534464540987068e-15im
 16.000000000004306 - 2.220446049250313e-15im
 24.99999999998865 + 2.0903753311370056e-15im
 36.00000000000222 + 4.884981308350689e-15im
 49.000000000014595 + 1.0182491378134327e-15im
 64.00000000001077 + 4.884981308350689e-15im
So simple! An attentive reader might notice that csq looks slightly different from the previous ciphertext: in particular, it has "length 3" and a much larger scale. Explaining what these are and why they matter is beyond the scope of this article. Suffice it to say that we need to bring these values back down before continuing with the calculations, otherwise we will run out of "space" in the ciphertext. Fortunately, each of the two quantities that grew can be reduced:
# To get back down to length 2, we need to `keyswitch` (aka
# relinearize), which requires an evaluation key. Generating
# this requires the private key. In a real application we would
# have generated this up front and sent it along with the encrypted
# data, but since we have the private key, we can just do it now.
julia> ek = keygen(EvalMultKey, kp.priv)
CKKS multiplication key

julia> csq_length2 = keyswitch(ek, csq)
CKKS ciphertext (length 2, encoding CKKSEncoding{FixedRational{1208925819614629174706176,T} where T})

# Getting the scale back down is done using modswitching.
julia> csq_smaller = modswitch(csq_length2)
CKKS ciphertext (length 2, encoding CKKSEncoding{FixedRational{1.099511626783e12,T} where T})

# And it still decrypts correctly (though note we've lost some precision)
julia> decrypt(kp.priv, csq_smaller)
8-element CKKSEncoding{FixedRational{1.099511626783e12,T} where T} with indices 0:7:
  0.9999999999802469 - 5.005163520332181e-11im
  3.9999999999957723 - 1.0468514951188039e-11im
  8.999999999998249 - 4.7588542623100616e-12im
 16.000000000023014 - 1.0413447889166631e-11im
 24.999999999955193 - 6.187833723406491e-12im
 36.000000000002345 + 1.860733715346631e-13im
 49.00000000001647 - 1.442396043149794e-12im
 63.999999999988695 - 1.0722489563648028e-10im
In addition, modswitching (short for modulus switching) reduces the size of the ciphertext modulus, so we cannot keep doing this indefinitely (remember, we are using a somewhat-homomorphic encryption scheme):
julia> ℛ # Remember the ring we initially created
ℤ₁₃₂₉₂₂₇₉₉₇₅₆₈₀₈₁₄₅₇₄₀₂₇₀₁₂₀₇₁₀₄₂₄₈₂₅₇/(x¹⁶ + 1)

julia> ToyFHE.ring(csq_smaller) # It shrunk!
ℤ₁₂₀₈₉₂₅₈₂₀₁₄₄₅₉₃₇₇₉₃₃₁₅₅₃/(x¹⁶ + 1)

The last primitive we need is rotations. Like keyswitch above, this requires an evaluation key (here called a Galois key):

julia> gk = keygen(GaloisKey, kp.priv; steps=2)
CKKS galois key (element 25)

julia> decrypt(kp, circshift(c, gk))
8-element CKKSEncoding{FixedRational{1099511627776,T} where T} with indices 0:7:
 7.000000000001042 + 5.68459112632516e-16im
 8.000000000000673 + 5.551115123125783e-17im
 0.999999999999551 - 2.308655353580721e-16im
 1.9999999999989408 + 2.7755575615628914e-16im
 3.000000000000205 - 6.009767921608429e-16im
 4.000000000000538 + 5.551115123125783e-17im
 4.999999999998865 + 4.133860996136768e-17im
 6.000000000000185 - 1.6653345369377348e-16im

# And let's compare to doing the same on the plaintext
julia> circshift(plain, 2)
8-element OffsetArray(::Array{Complex{Float64},1}, 0:7) with eltype Complex{Float64} with indices 0:7:
 7.0 + 0.0im
 8.0 + 0.0im
 1.0 + 0.0im
 2.0 + 0.0im
 3.0 + 0.0im
 4.0 + 0.0im
 5.0 + 0.0im
 6.0 + 0.0im
That covers the basic usage of the HE library. But before we move on to using these primitives to compute neural network predictions, let's look at training the network.
Machine learning model
If you are not familiar with machine learning or the Flux.jl library, I recommend a quick run through the Flux.jl documentation or a free introduction to machine learning, because here we will only discuss the changes needed to apply the model to encrypted data.
Our starting point is the convolutional neural network from the Flux model zoo. We keep the same training loop, data preparation and so on, and only tweak the model slightly. Here it is:
function reshape_and_vcat(x)
    let y=reshape(x, 64, 4, size(x, 4))
        vcat((y[:,i,:] for i=axes(y,2))...)
    end
end

model = Chain(
    # First convolution, operating upon a 28x28 image
    Conv((7, 7), 1=>4, stride=(3,3), x->x.^2),
    reshape_and_vcat,
    Dense(256, 64, x->x.^2),
    Dense(64, 10),
)
This is essentially the same model as in the paper "Secure Outsourced Matrix Computation and Application to Neural Networks", which uses the same cryptographic scheme, with two differences: 1) for simplicity, we do not encrypt the model itself, and 2) we have bias vectors after every layer (which is what Flux does by default); I'm not sure whether that was the case in the cited work. Perhaps because of the second point, the test-set accuracy of our model turned out slightly higher (98.6% versus 98.1%), although hyperparameter differences could also explain it.
What may look unusual to those with machine learning experience is the x.^2 activation function. More common choices here would be tanh, relu, or something fancier. But while those functions (especially relu) are cheap to evaluate on plaintext values, they can be very expensive to evaluate in encrypted form (we would typically evaluate a polynomial approximation). Fortunately, in this case x.^2 works well enough.
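As an illustration of why something like relu is awkward under homomorphic encryption: since only additions and multiplications are available, relu would have to be replaced by a low-degree polynomial. The sketch below (for intuition only; it is not part of the model above) fits such a polynomial by least squares on [-1, 1].

relu(x) = max(x, 0.0)
xs = range(-1, 1; length=101)

# Least-squares fit of a degree-2 polynomial a + b*x + c*x^2 on [-1, 1]
V = hcat(ones(length(xs)), xs, xs.^2)
a, b, c = V \ relu.(xs)

poly_relu(x) = a + b*x + c*x^2             # something we *could* evaluate homomorphically
maximum(abs, poly_relu.(xs) .- relu.(xs))  # worst-case error of the approximation on the grid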
The rest of the training loop stays the same, except that we removed softmax from the model in favor of the logitcrossentropy loss function (we could also have kept it and evaluated softmax on the client after decryption). The complete code for training the model is on GitHub and finishes in a few minutes on any recent GPU.
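For orientation, here is a minimal sketch of what such a training loop might look like, assuming a Flux version contemporary with this article and that the MNIST images train_x (a 28x28x1xN array) and labels train_y (values 0:9) are already loaded; the actual script in the GitHub repository differs in the details.

using Flux, Statistics
using Flux: onehotbatch, onecold, logitcrossentropy, params
using Base.Iterators: partition

# Batches of 64, matching the batch size used for encryption below
batches = [(train_x[:, :, :, idx], onehotbatch(train_y[idx], 0:9))
           for idx in partition(1:size(train_x, 4), 64)]

loss(x, y) = logitcrossentropy(model(x), y)
accuracy(x, y) = mean(onecold(model(x)) .== onecold(y))

opt = ADAM()
for epoch in 1:10
    Flux.train!(loss, params(model), batches, opt)
    @show accuracy(batches[1]...)
end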
Efficient operations
Now we know what operations we need to perform:
- Convolution.
- Element-wise squaring.
- Matrix multiplication.
Squaring is simple, we have already seen it above, so let's look at the other two operations. We will assume the batch size is 64 (you might notice that the model parameters and the batch size are chosen to take full advantage of the 4096-element vectors we get from a realistic parameter choice: 64 images times 64 convolution-window positions fills exactly one ciphertext).
Convolution
Recall how convolution works. We take a window (in our case 7x7) of the original input array and multiply each of its elements by the corresponding element of the convolution mask. Then we move the window by some stride (in our case the stride is 3, i.e. we move by 3 elements) and repeat the process (with the same convolution mask). The animation below (source) shows a 3x3 convolution with stride (2, 2) (blue array: input, green: output):
In addition, we perform convolution in four different “channels” (that is, we repeat convolution 3 more times with different masks).
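Written out in plain Julia (a sketch for intuition only; in the model above Flux's Conv layer does this for us), a single-channel sliding-window pass might look like this:

# Slide a kx-by-ky mask over the image with the given stride and sum the
# element-wise products in every window. (Illustrative only; it ignores the kernel
# flipping that distinguishes convolution from cross-correlation.)
function naive_conv(img::Matrix, mask::Matrix; stride=3)
    kx, ky = size(mask)
    nx = (size(img, 1) - kx) ÷ stride + 1
    ny = (size(img, 2) - ky) ÷ stride + 1
    [sum(img[(i-1)*stride .+ (1:kx), (j-1)*stride .+ (1:ky)] .* mask)
     for i in 1:nx, j in 1:ny]
end

img, mask = rand(28, 28), rand(7, 7)
size(naive_conv(img, mask))   # (8, 8), matching the 8x8 windows described below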
Now that we know what to do, it remains to figure out how. We are lucky that convolution is the first operation in our model: to save resources we can preprocess the data on the client before encrypting it (no weights are needed for this). Concretely:
- First, we extract each convolution window (that is, each 7x7 patch of the source image), which gives us 64 7x7 matrices per input image. (Note that with a 7x7 window and stride 3, there are 8x8 = 64 convolution windows when evaluating a 28x28 input image.)
- Next, we gather the same position of every window into a single vector, i.e. a 64-element vector per image, or a 64x64 matrix for a batch of 64 images (49 such 64x64 matrices in total).
- Then we encrypt.
The convolution then simply becomes a scalar multiplication of an entire matrix by the corresponding mask element, and summing all 49 of these products gives the convolution result. Here is what an implementation of this strategy might look like (on plaintext):
function public_preprocess(batch)
    ka = OffsetArray(0:7, 0:7)
    # Create feature extracted matrix
    I = [[batch[i′*3 .+ (1:7), j′*3 .+ (1:7), 1, k] for i′=ka, j′=ka] for k = 1:64]
    # Reshape into the ciphertext
    Iᵢⱼ = [[I[k][l...][i,j] for k=1:64, l=product(ka, ka)] for i=1:7, j=1:7]
end

Iᵢⱼ = public_preprocess(batch)

# Evaluate the convolution
weights = model.layers[1].weight
conv_weights = reverse(reverse(weights, dims=1), dims=2)
conved = [sum(Iᵢⱼ[i,j]*conv_weights[i,j,1,channel] for i=1:7, j=1:7) for channel = 1:4]
conved = map(((x,b),)->x .+ b, zip(conved, model.layers[1].bias))
This gives (modulo a reordering of the dimensions) the same answer as model.layers[1](batch).
Add encryption operations:
Iᵢⱼ = public_preprocess(batch)
C_Iᵢⱼ = map(Iᵢⱼ) do Iij
    plain = CKKSEncoding{Tscale}(zero(plaintext_space(ckks_params)))
    plain .= OffsetArray(vec(Iij), 0:(N÷2-1))
    encrypt(kp, plain)
end

weights = model.layers[1].weight
conv_weights = reverse(reverse(weights, dims=1), dims=2)
conved3 = [sum(C_Iᵢⱼ[i,j]*conv_weights[i,j,1,channel] for i=1:7, j=1:7) for channel = 1:4]
conved2 = map(((x,b),)->x .+ b, zip(conved3, model.layers[1].bias))
conved1 = map(ToyFHE.modswitch, conved2)
Note that a keyswitch is not required here, because the weights are public, so we do not increase the length of the ciphertext.
Matrix multiplication
Moving on to matrix multiplication, we can exploit the fact that rotating the elements of a vector effectively permutes the multiplication indices. Consider a row-major layout of the matrix elements in a vector: shifting the vector by a multiple of the row length then amounts to rotating the columns, which is a sufficient primitive for implementing matrix multiplication (at least of square matrices). Let's try it:
function matmul_square_reordered(weights, x)
    sum(1:size(weights, 1)) do k
        # We rotate the columns of the LHS and take the diagonal
        weight_diag = diag(circshift(weights, (0,(k-1))))
        # We rotate the rows of the RHS
        x_rotated = circshift(x, (k-1,0))
        # We do an elementwise, broadcast multiply
        weight_diag .* x_rotated
    end
end

function matmul_reorderd(weights, x)
    sum(partition(1:256, 64)) do range
        matmul_square_reordered(weights[:, range], x[range, :])
    end
end

fc1_weights = model.layers[3].W
x = rand(Float64, 256, 64)
@assert (fc1_weights*x) ≈ matmul_reorderd(fc1_weights, x)
Of course, for the general matrix multiplication, something more complicated is required, but for now this is enough.
Improving the technique
Now all the pieces of our technique work. Here is the whole thing (except for parameter selection and similar setup):
ek = keygen(EvalMultKey, kp.priv)
gk = keygen(GaloisKey, kp.priv; steps=64)

Iᵢⱼ = public_preprocess(batch)
C_Iᵢⱼ = map(Iᵢⱼ) do Iij
    plain = CKKSEncoding{Tscale}(zero(plaintext_space(ckks_params)))
    plain .= OffsetArray(vec(Iij), 0:(N÷2-1))
    encrypt(kp, plain)
end

weights = model.layers[1].weight
conv_weights = reverse(reverse(weights, dims=1), dims=2)
conved3 = [sum(C_Iᵢⱼ[i,j]*conv_weights[i,j,1,channel] for i=1:7, j=1:7) for channel = 1:4]
conved2 = map(((x,b),)->x .+ b, zip(conved3, model.layers[1].bias))
conved1 = map(ToyFHE.modswitch, conved2)

Csqed1 = map(x->x*x, conved1)
Csqed1 = map(x->keyswitch(ek, x), Csqed1)
Csqed1 = map(ToyFHE.modswitch, Csqed1)

function encrypted_matmul(gk, weights, x::ToyFHE.CipherText)
    result = repeat(diag(weights), inner=64).*x
    rotated = x
    for k = 2:64
        rotated = ToyFHE.rotate(gk, rotated)
        result += repeat(diag(circshift(weights, (0,(k-1)))), inner=64) .* rotated
    end
    result
end

fq1_weights = model.layers[3].W
Cfq1 = sum(enumerate(partition(1:256, 64))) do (i,range)
    encrypted_matmul(gk, fq1_weights[:, range], Csqed1[i])
end

Cfq1 = Cfq1 .+ OffsetArray(repeat(model.layers[3].b, inner=64), 0:4095)
Cfq1 = modswitch(Cfq1)

Csqed2 = Cfq1*Cfq1
Csqed2 = keyswitch(ek, Csqed2)
Csqed2 = modswitch(Csqed2)

function naive_rectangular_matmul(gk, weights, x)
    @assert size(weights, 1) < size(weights, 2)
    weights = vcat(weights, zeros(eltype(weights), size(weights, 2)-size(weights, 1), size(weights, 2)))
    encrypted_matmul(gk, weights, x)
end

fq2_weights = model.layers[4].W
Cresult = naive_rectangular_matmul(gk, fq2_weights, Csqed2)
Cresult = Cresult .+ OffsetArray(repeat(vcat(model.layers[4].b, zeros(54)), inner=64), 0:4095)
It doesn't look too tidy, but if you have followed along this far, every step should make sense.
Now let's think about which abstractions could simplify our lives. We are leaving the realm of cryptography and machine learning here and moving into programming language design, so let's take advantage of the fact that Julia lets you use and build powerful abstractions. For example, the whole convolution-extraction process can be encapsulated in a custom array type:
using BlockArrays

"""
    ExplodedConvArray{T, Dims, Storage} <: AbstractArray{T, 4}

Represents an `nxmx1xb` array of images, but rearranged into a series of
convolution windows. Evaluating a convolution compatible with `Dims` on this
array is achievable through a sequence of scalar multiplications and sums
on the underlying storage.
"""
struct ExplodedConvArray{T, Dims, Storage} <: AbstractArray{T, 4}
    # sx*sy matrix of b*(dx*dy) matrices of extracted elements
    # where (sx, sy) = kernel_size(Dims)
    #       (dx, dy) = output_size(DenseConvDims(...))
    cdims::Dims
    x::Matrix{Storage}
    function ExplodedConvArray{T, Dims, Storage}(cdims::Dims, storage::Matrix{Storage}) where {T, Dims, Storage}
        @assert all(==(size(storage[1])), size.(storage))
        new{T, Dims, Storage}(cdims, storage)
    end
end
Base.size(ex::ExplodedConvArray) = (NNlib.input_size(ex.cdims)..., 1, size(ex.x[1], 1))

function ExplodedConvArray{T}(cdims, batch::AbstractArray{T, 4}) where {T}
    x, y = NNlib.output_size(cdims)
    kx, ky = NNlib.kernel_size(cdims)
    stridex, stridey = NNlib.stride(cdims)
    kax = OffsetArray(0:x-1, 0:x-1)
    kay = OffsetArray(0:x-1, 0:x-1)
    I = [[batch[i′*stridex .+ (1:kx), j′*stridey .+ (1:ky), 1, k] for i′=kax, j′=kay] for k = 1:size(batch, 4)]
    Iᵢⱼ = [[I[k][l...][i,j] for k=1:size(batch, 4), l=product(kax, kay)] for (i,j) in product(1:kx, 1:ky)]
    ExplodedConvArray{T, typeof(cdims), eltype(Iᵢⱼ)}(cdims, Iᵢⱼ)
end

function NNlib.conv(x::ExplodedConvArray{<:Any, Dims}, weights::AbstractArray{<:Any, 4}, cdims::Dims) where {Dims<:ConvDims}
    blocks = reshape([
        Base.ReshapedArray(sum(x.x[i,j]*weights[i,j,1,channel] for i=1:7, j=1:7),
                           (NNlib.output_size(cdims)...,1,size(x, 4)), ())
        for channel = 1:4
    ], (1,1,4,1))
    BlockArrays._BlockArray(blocks, BlockArrays.BlockSizes([8], [8], [1,1,1,1], [64]))
end
Here we again use BlockArrays to represent an 8x8x4x64 array as four 8x8x1x64 arrays, as in the original code. Now the first stage of the model looks much nicer, at least on unencrypted arrays:
julia> cdims = DenseConvDims(batch, model.layers[1].weight; stride=(3,3), padding=(0,0,0,0), dilation=(1,1))
DenseConvDims: (28, 28, 1) * (7, 7) -> (8, 8, 4), stride: (3, 3) pad: (0, 0, 0, 0), dil: (1, 1), flip: false

julia> a = ExplodedConvArray{eltype(batch)}(cdims, batch);

julia> model(a)
10×64 Array{Float32,2}:
[snip]
Now how do we connect this with encryption? To do this, you need:
- Encrypt the struct (ExplodedConvArray) in such a way that we get a ciphertext for each field. Operations on such an encrypted struct then look up what the function would have done on the original struct and do the same thing homomorphically (a rough sketch follows after this list).
- Intercept certain operations in order to perform them differently in an encrypted context.
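For the first requirement, a rough sketch (not the actual implementation) could keep the public metadata in the clear and replace each stored matrix with a ciphertext; encode_for_ckks here is a hypothetical helper standing in for the CKKSEncoding/OffsetArray packing shown earlier:

# Hypothetical encrypted counterpart of ExplodedConvArray: the convolution
# dimensions stay public, each extracted-window matrix becomes a ciphertext.
struct EncryptedExplodedConvArray{Dims}
    cdims::Dims
    x::Matrix{<:ToyFHE.CipherText}
end

encrypt_exploded(kp, a::ExplodedConvArray) =
    EncryptedExplodedConvArray(a.cdims, map(m -> encrypt(kp.pub, encode_for_ckks(m)), a.x))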
Fortunately, Julia gives us an abstraction for the second requirement: a compiler pass injection via the Cassette.jl mechanism. I won't explain here what it is and how it works; briefly, it lets you define a context, for example Encrypted, together with rules for how operations should behave in that context. For example, for the second requirement you could write:
# Define Matrix multiplication between an array and an encrypted block array
function (*::Encrypted{typeof(*)})(a::Array{T, 2}, b::Encrypted{<:BlockArray{T, 2}}) where {T}
    sum(a*b for (i,range) in enumerate(partition(1:size(a, 2), size(b.blocks[1], 1))))
end

# Define Matrix multiplication between an array and an encrypted array
function (*::Encrypted{typeof(*)})(a::Array{T, 2}, b::Encrypted{Array{T, 2}}) where {T}
    result = repeat(diag(a), inner=size(a, 1)).*b
    rotated = b
    for k = 2:size(a, 2)
        rotated = ToyFHE.rotate(GaloisKey(*), rotated)
        result += repeat(diag(circshift(a, (0,(k-1)))), inner=size(a, 1)) .* rotated
    end
    result
end
As a result, the user will be able to write all of the above with a minimum amount of manual work:
kp = keygen(ckks_params)
ek = keygen(EvalMultKey, kp.priv)
gk = keygen(GaloisKey, kp.priv; steps=64)

# Create evaluation context
ctx = Encrypted(ek, gk)

# Do public preprocessing
batch = ExplodedConvArray{eltype(batch)}(cdims, batch);

# Run on encrypted data under the encryption context
Cresult = ctx(model)(encrypt(kp.pub, batch))

# Decrypt the answer
decrypt(kp, Cresult)
Of course, even that is not the end of it: choices such as the ring ℛ and where to insert modswitch and keyswitch operations are trade-offs between precision, security, and performance, and they depend on the model being run. Ideally, these choices would be made automatically rather than by hand, with as little manual work from the user as possible.
Conclusion
Automatically executing arbitrary computations on encrypted data is an ambitious goal, but I hope this post has shown that Julia provides a solid foundation for working on it. One earlier attempt in this direction is the RAMPARTS system (paper, JuliaCon talk), which compiles simple Julia code to run against the PALISADE library. Julia Computing is collaborating with the authors of RAMPARTS on Verona, the recently announced successor to that system. The field is moving quickly, both in the underlying cryptography and in the tooling around it, and I expect approaches like the one sketched here to become more practical over time.
In the meantime, if you want to experiment with these ideas yourself, ToyFHE is available. It is still a toy, and much work remains on both the implementation and the abstractions, but I hope it is a useful starting point.