- Armonk NY, US; Heqing Huang - Mahwah NJ, US; Jialong Zhang - White Plains NY, US; Dong Su - Sunnyvale CA, US; Dimitrios Pendarakis - Westport CT, US; Ian M. Molloy - Chappaqua NY, US
International Classification:
G06N 3/08 G06F 21/53 G06F 21/60 G06N 3/063
Abstract:
Mechanisms are provided to implement an enhanced privacy deep learning system framework (hereafter “framework”). The framework receives, from a client computing device, an encrypted first subnet model of a neural network, where the first subnet model is one partition of multiple partitions of the neural network. The framework loads the encrypted first subnet model into a trusted execution environment (TEE) of the framework, decrypts the first subnet model, within the TEE, and executes the first subnet model within the TEE. The framework receives encrypted input data from the client computing device, loads the encrypted input data into the TEE, decrypts the input data, and processes the input data in the TEE using the first subnet model executing within the TEE.
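The flow this abstract describes (encrypt the partitioned subnet model and the inputs on the client, decrypt and run them only inside the enclave) can be sketched in a few lines of Python. This is a toy illustration, not the patented system: Fernet symmetric encryption stands in for the attestation-based key provisioning a real TEE such as Intel SGX would use, and the subnet is a single numpy layer.

```python
import pickle

import numpy as np
from cryptography.fernet import Fernet

# In a real deployment the key would be provisioned to the enclave via
# remote attestation; Fernet is only a stand-in for that machinery.
key = Fernet.generate_key()
cipher = Fernet(key)

# --- client side: encrypt the subnet partition and the input data ---
frontnet_weights = {"W": np.random.randn(4, 8), "b": np.zeros(8)}
enc_model = cipher.encrypt(pickle.dumps(frontnet_weights))
enc_input = cipher.encrypt(pickle.dumps(np.random.randn(1, 4)))

# --- inside the TEE: decrypt, then run the partition in plaintext ---
weights = pickle.loads(cipher.decrypt(enc_model))
x = pickle.loads(cipher.decrypt(enc_input))
hidden = np.maximum(x @ weights["W"] + weights["b"], 0.0)  # one ReLU layer

# Only this intermediate representation ever leaves the enclave.
print(hidden.shape)
```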
Protecting Cognitive Systems From Model Stealing Attacks
- Armonk NY, US; Ian M. Molloy - Chappaqua NY, US; Dong Su - Sunnyvale CA, US
International Classification:
G06F 21/60 G06N 3/08 G06N 3/04 G06Q 10/06
Abstract:
Mechanisms are provided for obfuscating the trained configuration of trained cognitive model logic. The mechanisms receive input data for classification into one or more classes in a plurality of predefined classes as part of a cognitive operation of the cognitive system. The input data is processed by applying a trained cognitive model to the input data to generate an output vector having values for each of the plurality of predefined classes. A perturbation insertion engine modifies the output vector by inserting a perturbation in a function associated with generating the output vector, thereby generating a modified output vector, which is then output. The perturbation modifies one or more of the values so as to obfuscate the trained configuration of the trained cognitive model logic while maintaining the accuracy of classification of the input data.
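A minimal sketch of the perturbation idea follows. The function below is illustrative (the abstract does not disclose this exact scheme): it adds bounded noise to the class-probability vector and then restores the original top-1 class, so the returned scores reveal less about the model's decision surface while the classification result, and therefore accuracy, is unchanged.

```python
import numpy as np

def perturb_output(probs, scale=0.1, rng=None):
    """Return a noisy copy of `probs` whose argmax matches the original."""
    rng = rng or np.random.default_rng()
    top = int(np.argmax(probs))
    noisy = probs + rng.uniform(0.0, scale, size=probs.shape)
    noisy /= noisy.sum()                 # renormalize to a distribution
    j = int(np.argmax(noisy))
    if j != top:                         # noise moved the winner: swap back
        noisy[top], noisy[j] = noisy[j], noisy[top]
    return noisy

print(perturb_output(np.array([0.7, 0.2, 0.1])))
```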
- Armonk NY, US; Suresh N. Chari - Yorktown Heights NY, US; Ashish Kundu - Yorktown Heights NY, US; Sridhar Muppidi - Austin TX, US; Dong Su - Sunnyvale CA, US
Abstract:
An example operation may include one or more of: receiving, by a blockchain node or peer of a blockchain network, attribute data for a user profile; creating blockchain transactions to store attribute hashes and metadata to a shared ledger; receiving a user profile query from an identity consumer; creating blockchain transactions to retrieve the attribute hashes and metadata corresponding to the query; reconstructing the user profile from the metadata; responding to the query by providing attribute data to the identity consumer; and creating and storing hashes of the attribute data and metadata to the shared ledger.
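The hash-then-store pattern in this abstract is easy to illustrate with the standard library. The sketch below is a loose stand-in, not the claimed blockchain mechanism: a Python list plays the shared ledger, and `store_attribute`/`verify_attribute` are invented names.

```python
import hashlib
import time

ledger = []  # stand-in for the shared ledger maintained by the peers

def store_attribute(user, name, value):
    """Hash one profile attribute and record the hash plus metadata."""
    digest = hashlib.sha256(value.encode()).hexdigest()
    tx = {"user": user, "attribute": name, "hash": digest,
          "metadata": {"stored_at": time.time()}}
    ledger.append(tx)        # a real node would submit a transaction here
    return tx

def verify_attribute(user, name, claimed_value):
    """Check a claimed attribute value against the stored hashes."""
    digest = hashlib.sha256(claimed_value.encode()).hexdigest()
    return any(tx["hash"] == digest for tx in ledger
               if tx["user"] == user and tx["attribute"] == name)

store_attribute("alice", "email", "alice@example.com")
print(verify_attribute("alice", "email", "alice@example.com"))  # True
```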
System For Measuring Information Leakage Of Deep Learning Models
- Armonk NY, US; Heqing Huang - Mahwah NJ, US; Jialong Zhang - White Plains NY, US; Dong Su - Elmsford NY, US; Dimitrios Pendarakis - Westport CT, US; Ian Michael Molloy - Westchester NY, US
International Classification:
G06N 3/08 G06N 3/04
Abstract:
Using a deep learning inference system, the similarity of each of a set of intermediate representations to the input information supplied to the deep learning inference system is measured. The deep learning inference system includes multiple layers, each layer producing one or more associated intermediate representations. A subset of the intermediate representations that are most similar to the input information is selected. Using the selected subset, a partitioning point is determined in the multiple layers, partitioning them into two partitions defined so that information leakage for the two partitions will meet a privacy parameter when the first of the two partitions is prevented from leaking information. The partitioning point is output for use in partitioning the multiple layers of the deep learning inference system into the two partitions.
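In code, the partition-point search might look like the toy sketch below: score each layer's intermediate representation against the input, then cut at the first layer whose similarity falls below the privacy parameter. Cosine similarity on equal-sized synthetic activations is an assumption here; the abstract does not specify the system's actual similarity measure.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened arrays."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def choose_partition_point(x, reps, epsilon):
    """Index of the first layer whose representation is dissimilar
    enough to the input to satisfy the privacy parameter epsilon."""
    for layer, rep in enumerate(reps):
        if cosine(x, rep) < epsilon:   # no longer resembles the input
            return layer
    return len(reps)                   # no safe cut: protect every layer

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
# Deeper layers drift further from the input in this toy model.
reps = [0.9 ** (k + 1) * x + 0.3 * (k + 1) * rng.standard_normal(64)
        for k in range(6)]
print(choose_partition_point(x, reps, epsilon=0.5))
```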
- Armonk NY, US; Heqing Huang - Mahwah NJ, US; Jialong Zhang - San Jose CA, US; Dong Su - Sunnyvale CA, US; Dimitrios Pendarakis - Westport CT, US; Ian M. Molloy - Chappaqua NY, US
International Classification:
G06N 3/08 G06F 21/60
Abstract:
Deep learning training service framework mechanisms are provided. The mechanisms receive encrypted training datasets for training a deep learning model, execute a FrontNet subnet model of the deep learning model in a trusted execution environment, and execute a BackNet subnet model of the deep learning model external to the trusted execution environment. The mechanisms decrypt, within the trusted execution environment, the encrypted training datasets and train the FrontNet and BackNet subnet models of the deep learning model on the decrypted training datasets. The FrontNet subnet model is trained within the trusted execution environment and provides intermediate representations to the BackNet subnet model, which is trained external to the trusted execution environment using those intermediate representations. The mechanisms release a trained deep learning model, comprising the trained FrontNet and BackNet subnet models, to one or more client computing devices.
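A minimal PyTorch sketch of the split training loop follows. In the real framework the FrontNet would execute inside the TEE on decrypted batches and only its intermediate representations would cross the enclave boundary; here both halves run in one process, with a comment marking where that boundary would sit.

```python
import torch
import torch.nn as nn

frontnet = nn.Sequential(nn.Linear(16, 32), nn.ReLU())   # inside the TEE
backnet = nn.Sequential(nn.Linear(32, 10))               # outside the TEE
opt = torch.optim.SGD(list(frontnet.parameters()) +
                      list(backnet.parameters()), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 16)            # decrypted only inside the enclave
y = torch.randint(0, 10, (8,))

for _ in range(5):
    opt.zero_grad()
    hidden = frontnet(x)          # --- enclave boundary: only `hidden` leaves
    loss = loss_fn(backnet(hidden), y)
    loss.backward()               # gradients flow back across the boundary
    opt.step()
print(float(loss))
```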
- Armonk NY, US; Hasini Gunasinghe - West Lafayette IN, US; Ashish Kundu - Elmsford NY, US; Kapil Kumar Singh - Cary NC, US; Dong Su - Elmsford NY, US
International Classification:
H04L 9/32 H04L 29/06
Abstract:
A processor-implemented method facilitates identity exchange in a decentralized setting. A first system performs a pseudonymous handshake with a second system that has created an identity asset that identifies an entity. The second system has transmitted the identity asset to a third system, which is a set of peer computers that support a blockchain that securely maintains a ledger of the identity asset. The first system transmits a set of pseudonyms to the third system, where the set of pseudonyms comprises a first pseudonym that identifies the first system, a second pseudonym that identifies a user of the second system, and a third pseudonym that identifies the third system. The first system receives the identity asset from the third system, which securely ensures a validity of the identity asset as identified by the first pseudonym, the second pseudonym, and the third pseudonym.
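As a loose illustration of the pseudonym exchange, each party can derive an unlinkable per-session pseudonym from a secret and a shared nonce. The HMAC-SHA256 construction below is an invented stand-in, not the protocol claimed in the patent.

```python
import hashlib
import hmac
import os

def pseudonym(secret, session_nonce):
    """Derive a per-session pseudonym from a party's secret key."""
    return hmac.new(secret, session_nonce, hashlib.sha256).hexdigest()

nonce = os.urandom(16)                        # fresh for each handshake
p_consumer = pseudonym(b"first-system-key", nonce)        # first system
p_user     = pseudonym(b"second-system-user-key", nonce)  # user of second
p_ledger   = pseudonym(b"peer-network-key", nonce)        # third system

# The identity asset is bound to all three pseudonyms before release.
asset_binding = hashlib.sha256(
    (p_consumer + p_user + p_ledger).encode()).hexdigest()
print(asset_binding[:16])
```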
Privacy Enhancing Deep Learning Cloud Service Using A Trusted Execution Environment
- Armonk NY, US; Heqing Huang - Mahwah NJ, US; Jialong Zhang - White Plains NY, US; Dong Su - Sunnyvale CA, US; Dimitrios Pendarakis - Westport CT, US; Ian M. Molloy - Chappaqua NY, US
International Classification:
G06N 3/08 G06F 21/60 G06F 21/53
Abstract:
Mechanisms are provided to implement an enhanced privacy deep learning system framework (hereafter “framework”). The framework receives, from a client computing device, an encrypted first subnet model of a neural network, where the first subnet model is one partition of multiple partitions of the neural network. The framework loads the encrypted first subnet model into a trusted execution environment (TEE) of the framework, decrypts the first subnet model, within the TEE, and executes the first subnet model within the TEE. The framework receives encrypted input data from the client computing device, loads the encrypted input data into the TEE, decrypts the input data, and processes the input data in the TEE using the first subnet model executing within the TEE.
- Armonk NY, US; Taesung Lee - Ridgefield CT, US; Ian M. Molloy - Chappaqua NY, US; Dong Su - Elmsford NY, US
International Classification:
G06N 3/08 G06N 3/04 H04L 29/06
Abstract:
Mechanisms are provided to implement a hardened neural network framework. A data processing system is configured to implement a hardened neural network engine that operates on a neural network to harden the neural network against evasion attacks and generates a hardened neural network. The hardened neural network engine generates a reference training data set based on an original training data set. The neural network processes the original training data set and the reference training data set to generate first and second output data sets. The hardened neural network engine calculates a modified loss function of the neural network, where the modified loss function is a combination of an original loss function associated with the neural network and a function of the first and second output data sets. The hardened neural network engine trains the neural network based on the modified loss function to generate the hardened neural network.
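The modified loss is straightforward to sketch in PyTorch: combine the original cross-entropy with a term computed from the two output sets. The KL-divergence consistency term and the lambda weighting below are illustrative assumptions; the abstract only says the modified loss is a combination of the original loss and a function of the first and second output data sets.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 0.5                                     # weight of the hardening term

x = torch.randn(8, 16)                        # original training batch
x_ref = x + 0.05 * torch.randn_like(x)        # reference set: perturbed copies
y = torch.randint(0, 10, (8,))

for _ in range(5):
    opt.zero_grad()
    out, out_ref = model(x), model(x_ref)     # first and second output sets
    original_loss = F.cross_entropy(out, y)
    consistency = F.kl_div(F.log_softmax(out_ref, dim=1),
                           F.softmax(out, dim=1), reduction="batchmean")
    loss = original_loss + lam * consistency  # the modified loss function
    loss.backward()
    opt.step()
print(float(loss))
```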
IBM Nov 2016 - Jun 2018
Postdoctoral Researcher
Alibaba Group Nov 2016 - Jun 2018
Senior Engineer
Purdue University Aug 2010 - Dec 2016
Research Assistant
Samsung Electronics May 2013 - Aug 2013
Research Intern
Chinese Academy of Sciences 2007 - 2010
Research Assistant
Education:
Purdue University 2010 - 2016
Doctorate, Doctor of Philosophy, Computer Science
Purdue University 2010 - 2015
Master's, Computer Science
Chinese Academy of Sciences 2007 - 2010
Master's
Tianjin University 2001 - 2005
Bachelor's, Software Engineering
Skills:
Python, Machine Learning, C++, Java, MATLAB, R, Data Mining, Information Security, Data Privacy, Database Systems, Hadoop, MySQL, Redis, MapReduce, Applied Cryptography, Git, Weka, Research, Linux, C, Algorithms, Computer Science, LaTeX, Software Engineering, Unix, Programming