
How are you ensuring data privacy?

All data is client-side encrypted and will only be decrypted by the owner, and only if they agree to participate in a remotely triggered data collection.


Queries for data are processed on demand, so the data stays encrypted until the user agrees to publish the requested data. After approval, the data is decrypted by the user and transferred via a secured channel to a secured environment where it is processed. Once processing is complete, all data sets are deleted from that environment, minimizing the risk of unwanted data breaches within the system.
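A minimal sketch of this on-demand flow, for illustration only: the names `DataRequest`, `decryptLocally`, and `sendOverSecureChannel` are assumptions, not actual MADANA APIs.

```typescript
interface DataRequest {
  requestId: string;
  requestedFields: string[];
}

// Stub: decrypt the requested fields with the owner's private key.
async function decryptLocally(fields: string[]): Promise<string> {
  return JSON.stringify(fields); // placeholder for real decryption
}

// Stub: transfer the data over a secured channel to the processing
// environment, which deletes all data sets after processing.
async function sendOverSecureChannel(payload: string): Promise<string> {
  return `processed:${payload}`; // placeholder for real transfer
}

async function handleDataRequest(
  request: DataRequest,
  userApproved: boolean
): Promise<void> {
  // Data stays encrypted at rest until the owner explicitly approves.
  if (!userApproved) return;

  const plaintext = await decryptLocally(request.requestedFields);
  const result = await sendOverSecureChannel(plaintext);
  console.log(`Request ${request.requestId}: ${result}`);
}
```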

How does the analysis buyer communicate with the plug-in provider?

The analysis results buyer is provided with a simple feedback/request channel to the plug-in providers through an easy-to-use front end, e.g. checkboxes and filtering mechanisms.

How do you use Blockchain?

In the first stage we use the blockchain as a settlement and control layer. Through smart contracts, we can guarantee the immutable execution of an analysis and of participation in the network. Furthermore, the blockchain is a very efficient way to record processes that need to be made public. With the further technological development of the blockchain and DLT sphere, we envision that more and more parts of the ecosystem will be decentralized in the future.

How do you validate if data is real or faked?

We'll have several layers. First of all, data can only be put into the system by verified/validated sources. The data is then converted based on a globally valid semantic data model that includes limit values for each datatype. When the data is processed within the trusted execution environment, "security plug-ins" are executed as a first step to filter out random or malicious data.

In general, it's worth considering what "accurate" exactly means. Beyond the general mechanism of filtering out randomly generated data and malicious data sources, the accuracy of the data is bound to its scope of usage and therefore tied to the data models within the system. In practice, for every datatype, specialized plug-ins (and later perhaps AIs) need to be developed that guarantee a certain quality. Specialized control and deep-learning plug-ins will learn and monitor the quality of the data according to the data model standards of each industry. For example, with healthcare data, the data that deviates from the norm might be more interesting than the rest. Therefore, the scope is important.
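A simplified sketch of the limit-value check described above: each datatype in the semantic data model carries a valid range, and a security plug-in rejects values outside it. The type names and example ranges are illustrative assumptions, not the actual MADANA data model.

```typescript
interface DatatypeSpec {
  name: string;
  min: number;
  max: number;
}

// Example entries of a semantic data model (hypothetical values).
const dataModel: Record<string, DatatypeSpec> = {
  heartRateBpm: { name: "heartRateBpm", min: 20, max: 250 },
  bodyTempCelsius: { name: "bodyTempCelsius", min: 30, max: 45 },
};

// First-pass security filter: drop values that violate the model's
// limit values before any analysis plug-in sees them.
function filterByDataModel(
  records: Array<{ type: string; value: number }>
): Array<{ type: string; value: number }> {
  return records.filter((r) => {
    const spec = dataModel[r.type];
    return spec !== undefined && r.value >= spec.min && r.value <= spec.max;
  });
}

// Usage: random or malicious out-of-range values are filtered out.
console.log(
  filterByDataModel([
    { type: "heartRateBpm", value: 72 },   // kept
    { type: "heartRateBpm", value: 9999 }, // dropped
  ])
);
```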

What are the tasks of the smart contracts?

The smart contracts provide immutability and transparency for the participants in the payment and purchase process. Moreover, the analysis processing will be handled by smart contracts.
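A conceptual sketch of what such a contract tracks, assuming a hypothetical purchase state machine; on Lisk this kind of logic is implemented as JavaScript-based custom transactions rather than Ethereum-style contracts.

```typescript
type PurchaseState = "PAID" | "PROCESSING" | "DELIVERED";

interface AnalysisPurchase {
  purchaseId: string;
  buyer: string;
  pluginId: string;
  price: number;
  state: PurchaseState;
}

// Every state transition is recorded on-chain, giving all participants
// an immutable, transparent view of the payment and purchase process.
function advancePurchase(p: AnalysisPurchase): AnalysisPurchase {
  switch (p.state) {
    case "PAID":
      return { ...p, state: "PROCESSING" }; // analysis is triggered
    case "PROCESSING":
      return { ...p, state: "DELIVERED" };  // result handed to the buyer
    case "DELIVERED":
      return p; // terminal state; no further transitions
  }
}
```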

What if Lisk does not achieve its goals? Do you have a Plan B?

We have development aspects in our project that are decoupled from the blockchain aspect of our platform, so we can move on even without the SDK. In the very unlikely event that Lisk announces its ambitions are impossible to achieve, our blockchain layer could easily be migrated to a suitable alternative. Because the alternatives are in a very early development phase as well, we are carefully watching and evaluating the risks. From our perspective, Lisk is the best fit for us in terms of maturity and development potential, so we have no worries at all. Lisk is working hard to develop disruptive technology, and this needs to be done and tested carefully.

What incentive do your node maintainers have if the PAX supply is fixed?

There will definitely be MADANA nodes, and they will get a fair share for their work and contribution within the system. In the current conceptual design, however, these nodes aren't directly related to the consensus model/algorithm of the sidechain itself.

What is a plug-in?

A plug-in is a specialized analysis program that a plug-in provider can offer to everyone on our platform. Plug-ins are tailored to handle the data models in our network and offer the data analysis buyer an easy-to-conduct analysis process on our platform. The plug-in provider earns a reward every time their plug-in is used.
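One way to picture a plug-in: a unit of analysis code with a declared input data model and a reward address for its provider. This interface is an illustrative assumption, not the actual MADANA API.

```typescript
interface AnalysisPlugin {
  pluginId: string;
  providerAddress: string; // receives a reward on every use
  inputDataModel: string;  // semantic data model the plug-in handles
  // Runs inside the trusted execution environment on decrypted records.
  run(records: unknown[]): Promise<unknown>;
}

// Minimal example: a plug-in computing an average over numeric records.
const averagePlugin: AnalysisPlugin = {
  pluginId: "avg-v1",
  providerAddress: "provider-address-placeholder",
  inputDataModel: "numericSeries",
  async run(records: unknown[]): Promise<number> {
    const values = records as number[];
    return values.reduce((sum, v) => sum + v, 0) / values.length;
  },
};
```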

What part of the code will be open source?

We intend to follow the zeitgeist of the blockchain and open-source communities and to develop in a transparent manner. Nearly everything will be open source.

When is something encrypted?

Before data producers can start to collect data and participate with it in the system, they have to create a key pair. All data that goes into the data producer's local data store is encrypted with the given public key; therefore, only the originator has access to their own data. When data is transferred to the analysis processing software, it is first encrypted with the freshly generated public key of the analysis processing entity, so only the processing node in the trusted environment has access to the data, nobody else. After the analysis result has been created, it is encrypted with the public key of the requesting instance, so that only the buyer has access to the result and nobody else.
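A minimal sketch of this key handling using Node's built-in crypto module, showing only who can decrypt what at each stage; a real deployment would use hybrid (envelope) encryption for large payloads, and none of this reflects MADANA's actual implementation.

```typescript
import { generateKeyPairSync, publicEncrypt, privateDecrypt } from "crypto";

// 1. The data producer creates a key pair; data at rest is encrypted
//    with the public key, so only the producer can read it.
const producer = generateKeyPairSync("rsa", { modulusLength: 2048 });
const atRest = publicEncrypt(producer.publicKey, Buffer.from("raw data"));

// 2. For processing, the data is re-encrypted to the freshly generated
//    key pair of the processing entity in the trusted environment.
const processor = generateKeyPairSync("rsa", { modulusLength: 2048 });
const approved = privateDecrypt(producer.privateKey, atRest); // producer approves
const inTransit = publicEncrypt(processor.publicKey, approved);

// 3. The analysis result is encrypted to the buyer's public key, so
//    only the buyer can read it.
const buyer = generateKeyPairSync("rsa", { modulusLength: 2048 });
const processed = privateDecrypt(processor.privateKey, inTransit);
const toBuyer = publicEncrypt(buyer.publicKey, processed);

console.log(privateDecrypt(buyer.privateKey, toBuyer).toString());
```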

Where is the data stored?

Created data is immediately client-side encrypted and stored on the originator's device.

Why a sidechain on Lisk?

Because Lisk has proven its commitment to developing the right tools for developers, and because the Lisk blockchain with its sidechains is better tailored to our needs.

Why won't MADANA be able to access the data?

At no point in the process does MADANA itself have access to the raw data being processed. The MADANA platform only transfers encrypted data from the data producer side to the analysis processing side and manages the connection between the two. Metaphorically speaking, MADANA acts like a mail carrier that cannot look inside the encrypted package.
