Wednesday, 11 September 2013

e-Health Journal paper:

  • Accepted for publication in Future Generation Computer Systems, an Elsevier journal.

Secure Controlled Data Sharing Paper:
  • Implemented and optimised a first draft application; awaiting porting to the TED device.
  • A first draft of the paper has also been completed, with the Implementation and Evaluation sections still missing.
Collaboration with LaTTe:
  • Retrieved the Tracer code and brainstormed ideas on how to secure a relational database.
  • Will soon present my work to the LaTTe group as part of my hackday.

2 papers reviewed:
Guojun Wang, Fengshun Yue, and Qin Liu, "A Secure Self-Destructing Scheme for Electronic Data," Journal of Computer and System Sciences (Elsevier), 79(2): 279-290, March 2013.

Key points:
    • Exposing sensitive electronic data on the internet has become easier.
    • Service providers may leak messages for profit or to support investigations.
    • Limitations of previous work include the decryption key being accidentally disclosed to unauthorised users and reliance on third parties that may be untrustworthy (for profit or investigation); in Geambasu’s scheme, to which this work refers, the entire ciphertext can still be obtained and is susceptible to brute-force attack.
    • The main idea is to encapsulate the data and key in objects and destroy both after a period of time specified by the owner, automatically and without any user intervention.
    • Data is encapsulated in Vanishing Data Objects (VDOs) and can only be decapsulated by trusted authorised users.
    • Data is stored in a Distributed Hash Table (DHT), which makes room for newer data by discarding older data after a set time (the decryption key and part of the ciphertext are destroyed after a certain period). The DHT’s huge size, geographic distribution and decentralisation make attacks on the DHT network difficult.
    • The paper devises a cryptosystem that allows keys to be generated according to the policies and the client’s credentials efficiently.

  • The system is flexible, allowing any type of encryption scheme for the data without any alterations.
  • The paper assumes trusted authorised users, since it is impossible for the system to protect sensitive data if authorised users leak plaintext recovered from the VDO.
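
The vanishing mechanism described above can be sketched in a few lines. This is a toy, assuming an in-memory dict with expiry times standing in for the DHT, XOR in place of a real cipher, and hypothetical names (`TOY_DHT`, `make_vdo`, `open_vdo`) that do not come from the paper:

```python
# Toy sketch of a Vanishing Data Object (VDO): the data key is split
# into XOR shares pushed into a (mock) DHT that forgets entries after
# a time limit; once shares vanish, the ciphertext is unrecoverable.
import hashlib
import os
import time

TOY_DHT = {}  # index -> (key_share, expiry_timestamp)

def _xor(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR as a placeholder; a real system would use an
    # authenticated cipher such as AES-GCM.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def make_vdo(plaintext: bytes, lifetime_s: float, n_shares: int = 3):
    key = os.urandom(16)
    # n-of-n XOR secret sharing: all shares XOR back to the key.
    shares = [os.urandom(16) for _ in range(n_shares - 1)]
    last = key
    for s in shares:
        last = _xor(last, s)
    shares.append(last)
    expiry = time.time() + lifetime_s
    indices = []
    for s in shares:
        idx = hashlib.sha256(s).hexdigest()
        TOY_DHT[idx] = (s, expiry)  # the DHT drops entries after expiry
        indices.append(idx)
    return {"ciphertext": _xor(plaintext, key), "indices": indices}

def open_vdo(vdo):
    key = bytes(16)
    for idx in vdo["indices"]:
        share, expiry = TOY_DHT.get(idx, (None, 0))
        if share is None or time.time() > expiry:
            raise ValueError("key shares expired: data has vanished")
        key = _xor(key, share)
    return _xor(vdo["ciphertext"], key)
```

Decapsulation works only while every share is still alive in the DHT, which is the property the paper relies on.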

Guojun Wang, Qin Liu, and Jie Wu, "Achieving Fine-Grained Access Control for Secure Data Sharing on Cloud Servers," Wiley's Concurrency and Computation: Practice and Experience, 23(12): 1443–1464, August 2011.

Key points:
    • Data sharing has attracted a lot of attention in both the industry and academic communities.
    • A CSP may sell confidential information about an enterprise to its closest business competitors for profit; this raises privacy and security issues that could result in huge losses for enterprises.
    • Introduces the conjunctive precise and fuzzy identity-based encryption (PFIBE) scheme for secure data sharing on cloud servers.
    • Encrypts data based on a user ID or an access control policy over attributes, such that only the user with that ID, or one satisfying the policy, can decrypt the data.
    • Combines Hierarchical Identity-Based Encryption (HIBE) system and the ciphertext-policy attribute-based encryption (CP-ABE) system.
  • Provides fine-grained access control to data.
  • High performance and flexibility.
  • System assumes trusted authorised users.
  • Paper has complex mathematics.

Monday, 15 July 2013

4 papers reviewed:
Keith Frikken, Mikhail Atallah, and Jiangtao Li. 2006. Attribute-Based Access Control with Hidden Policies and Hidden Credentials. IEEE Trans. Comput. 55, 10 (October 2006), 1259-1270. DOI=10.1109/TC.2006.158

Key points:

    • Hiding the access policy from clients and hiding client attributes from the server.
    • Previous works on this topic revealed parts of the ACP to clients. The proposed solution claims to reveal nothing of the ACP.
    • A client and an owner engage in a protocol: the client supplies a subset of her credentials, and the owner supplies the hidden ACP and the protected data. If the attributes in the supplied credentials satisfy the ACP, the client obtains the revealed data.
    • Uses techniques of homomorphic encryption, oblivious transfer, scrambled circuit evaluation and shuffling.
  • Strengths/Weakness:
    • The client learns as little information as possible about the ACP, and the owner learns as little as possible about the client’s credentials.
    • The server does not learn which credentials a client has from the protocols.
    • The scheme is policy-indistinguishable: two policies that evaluate to the same value for the client’s credentials produce indistinguishable transcripts, so the client learns nothing about the policy other than whether access is granted.
    • The scheme relies heavily on exchange of information, which could potentially leak some information.
    • With a growing number of attributes, communication complexity increases exponentially.
    • System works only for policies that check for the presence of certain attributes.
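
The last bullet’s policy class (checking only for the presence of certain attributes) can be illustrated with a toy. Note this sketch offers none of the paper’s hiding guarantees (no homomorphic encryption, oblivious transfer or scrambled circuits); salted hashes merely stand in for commitments, and all names are hypothetical:

```python
# Toy presence-of-attributes policy check: both sides exchange salted
# hashes instead of raw attribute strings, and access is granted only
# if every required attribute is present.
import hashlib

def commit(attr: str, salt: str) -> str:
    return hashlib.sha256((salt + attr).encode()).hexdigest()

def satisfies(client_attrs, required_attrs, salt="session-salt"):
    have = {commit(a, salt) for a in client_attrs}
    need = {commit(a, salt) for a in required_attrs}
    return need <= have  # subset test: all required attributes present
```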

Deqing Zou and Zhensong Liao, "A New Approach for Hiding Policy and Checking Policy Consistency," International Conference on Information Security and Assurance (ISA 2008), pp. 237-242, 24-26 April 2008.
doi: 10.1109/ISA.2008.39

Key points:

    • Disclosure of sensitive policies may cause damage.
    • Furthermore, some policies tend to be self-contradictory, and hence a checking mechanism is required.
    • MAC and RBAC techniques do not work well for resource sharing due to limitations in their design and application.
    • A new method to hide access control policy using ATN (Automated Trust Negotiation).
    • A new way of handling policy consistency.
    • New approach for protecting user’s privacy.
    • Avoiding unwanted negotiation failure and improving negotiation efficiency.
  • Strengths/Weakness:
    • Previous work is shown to be effective but difficult to implement in the real world; the authors claim their new solution is efficient to implement.
    • Paper uses matrices and is very mathematical.

Xinfeng Ye and Mingyu Gao, "Access Control with Hidden Policies and Credentials for Service Computing," 2012 IEEE Ninth International Conference on Services Computing (SCC), pp. 242-249, 24-29 June 2012. doi: 10.1109/SCC.2012.13

Key points:

    • How to keep credentials and access control policies secret from the service providers.
    • Scheme uses cryptographic techniques to hide the policies and credentials needed to access data.
    • Cryptographic keys are used to represent the credentials and policies.
    • The paper devises a cryptosystem that allows keys to be generated according to the policies and the client’s credentials efficiently.
  • Strengths/Weakness:
    • Many previous works do not attempt to hide the policies or credentials and hence the novelty of the work is good.
    • Previous works that focus on policy hiding are computationally intensive and very inefficient.

Marian Harbach, Sascha Fahl, Michael Brenner, Thomas Muders, and Matthew Smith. 2012. Towards privacy-preserving access control with hidden policies, hidden credentials and hidden decisions. In Proceedings of the 2012 Tenth Annual International Conference on Privacy, Security and Trust (PST) (PST '12). IEEE Computer Society, Washington, DC, USA, 17-24. DOI=10.1109/PST.2012.6297915

Key points:

    • The need for hidden policies, hidden credentials, and hidden decisions.
    • The central issue with resource sharing in the Cloud is that of trust.
    • Argue for the need for hidden policies, credentials and decisions.
    • Present an approach using Homomorphic cryptography Supported Access Control (HSAC) as a first step to achieving the above properties.
  • Strengths/Weakness:
    • Many previous works do not attempt to hide the policies or credentials and hence the novelty of the work is good.
    • Previous works that focus on policy hiding are computationally intensive and very inefficient.

Sunday, 14 July 2013

5 papers reviewed:
Divyakant Agrawal, Sudipto Das, and Amr El Abbadi. 2011. Big data and cloud computing: current state and future opportunities. In Proceedings of the 14th International Conference on Extending Database Technology (EDBT/ICDT '11), Anastasia Ailamaki, Sihem Amer-Yahia, Jignesh Pate, Tore Risch, Pierre Senellart, and Julia Stoyanovich (Eds.). ACM, New York, NY, USA, 530-533. DOI=10.1145/1951365.1951432

Key points:

  • Provides summary of the current state of big data
    • Provides a study of big data systems with in-depth analysis of update-heavy applications.
    • Provides a study of big data systems supporting ad-hoc analytics and decision support.
    • Key-value stores are very popular for big data, alongside tools such as Hadoop.
  • Strengths/Weakness:
    • Provides a summary of big data used in update-heavy web applications and in analytics and decision support for competitive marketing.
    • Tutorial not extensive or clear enough.

Christian Cachin, Kristiyan Haralambiev, Hsu-Chun Hsiao, and Alessandro Sorniotti. Policy-based secure deletion. Research Report RZ 3843, IBM Research, 2013.

Key points:

  • How to securely delete data from storage systems
    • Modern storage systems do not reliably destroy stored data and leave traces even after a deletion operation is called.
    • Users would therefore like to control how their data is deleted.
    • Introduces a secure deletion scheme from encryption and threshold secret sharing
    • Stored data is grouped into protection classes, and attributes control the selective erasure of data through a policy.
    • A set of attributes is given as arguments to the secure deletion operation; the scheme sets the corresponding nodes in the policy graph to TRUE, and after the master key is updated the corresponding files are no longer accessible.
    • Also presents a prototype implementation of secure deletion scheme.
  • Strengths/Weakness:
    • Useful way to delete a large number of files quickly.
    • Eventually, there will be a clutter of illegible data stored in storage systems making it slightly inefficient.
    • Also, an attacker may attempt brute force attacks to eventually decrypt the data.
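
The deletion-by-key-destruction idea above can be sketched as follows, assuming each file is encrypted under its protection class’s key so that erasing one key deletes the whole class at once. XOR stands in for a real authenticated cipher and all names are hypothetical:

```python
# Toy secure-deletion store: files are grouped into protection classes,
# and destroying a class key renders every file in that class unreadable
# in one step, while the ciphertext "clutter" remains on disk.
import os

class SecureStore:
    def __init__(self):
        self.class_keys = {}  # protection class -> erasable key
        self.blobs = {}       # file name -> (class, ciphertext)

    @staticmethod
    def _xor(data, key):
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def put(self, name, data, protection_class):
        key = self.class_keys.setdefault(protection_class, os.urandom(16))
        self.blobs[name] = (protection_class, self._xor(data, key))

    def get(self, name):
        cls, ct = self.blobs[name]
        if cls not in self.class_keys:
            raise KeyError("protection class securely deleted")
        return self._xor(ct, self.class_keys[cls])

    def secure_delete(self, protection_class):
        # Destroying the key deletes all files of the class at once;
        # the illegible ciphertext left behind is the inefficiency the
        # last bullets point out.
        self.class_keys.pop(protection_class, None)
```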

Changqing Ji, Yu Li, Wenming Qiu, U. Awada, and Keqiu Li, "Big Data Processing in Cloud Computing Environments," 2012 12th International Symposium on Pervasive Systems, Algorithms and Networks (ISPAN), pp. 17-23, 13-15 Dec. 2012.
doi: 10.1109/I-SPAN.2012.9

Key points:

  • Effective management and analysis of large-scale data poses an interesting and critical challenge.
    • DBMSs are not suitable for processing extremely large-scale data.
    • A Big Data platform is needed.
    • Surveys the status of big data studies and related work, giving a general view of big data management technologies and applications.
    • Provides an overview of major big data approaches such as MapReduce.
    • Discusses open issues and challenges of processing big data in terms of three aspects: storage, analysis and security.
  • Strengths/Weakness:
    • Provides good overview and definition of big data
    • Provides good up-to-date current research of big data
    • Slightly difficult to understand.

Zeeshan Pervez, Asad M. Khattak, Sungyoung Lee, Young-Koo Lee, Eui-Nam Huh: Oblivious access control policies for cloud based data sharing systems. Computing (2012) Journal Article: 1-24

Key points:

  • How to hide access control policies from the Cloud
    • Revealing the ACP and access parameters to the Cloud undermines their efficacy.
    • It is important to design a system that ensures end-to-end privacy, covering the ACP, access parameters and outsourced data.
    • A new access control mechanism called Oblivious Access Control Policy Evaluation (O-ACE) where ACP and access parameters are concealed from the cloud
    • O-ACE ensures end-to-end privacy using standard cryptographic primitives
    • O-ACE has been implemented in Google Cloud using Google App Engine.
  • Strengths/Weakness:
    • Many works do not focus on protecting ACP, and hence this is a useful and interesting paper.
    • Very easy to understand paper with good flow

Mohamed Meky, Amjad Ali: A Novel and Secure Data Sharing Model with Full Owner Control in the Cloud Environment. International Journal of Computer Science and Information Security Vol. 9 No. 6 (2011): 12 - 17

Key points:

  • How to provide data owner control over data in the Cloud in terms of confidentiality and integrity.
    • Security threats include unauthorised data access, compromised data integrity and confidentiality, and less direct control by data owners over data stored in the Cloud.
    • A secure model that allows the data owner to have full control to grant or deny data sharing in the Cloud environment.
    • The model ensures confidentiality and integrity, and prevents Cloud providers from revealing data to unauthorised users.
    • The model can be implemented for several applications using a variety of data formats and any encryption algorithm.
  • Strengths/Weakness:
    • Data is kept secret from the Cloud provider and unauthorised users quite well.
    • Data integrity is also guaranteed quite well although other attacks such as forgery can still compromise integrity.
    • The data owner is required to store every user’s secrets and keys. This can become highly inefficient when data owners want to share data with millions of users.
    • Does not give the data owner control over how the data is used, e.g. to prevent copying or redistribution.

ACM CCS '13 Conference Paper:
FGCS eHealth Journal paper:
  • Submitted new revision and waiting on outcome.
Book chapter:
  • Submitted camera-ready proof of paper and awaiting results.

Tuesday, 23 April 2013

4 Papers reviewed:

Jin Li, Gansen Zhao, Xiaofeng Chen, Dongqing Xie, Chunming Rong, Wenjun Li, Lianzhang Tang, and Yong Tang, "Fine-Grained Data Access Control Systems with User Accountability in Cloud Computing," 2010 IEEE Second International Conference on Cloud Computing Technology and Science (CloudCom), pp. 89-96, Nov. 30-Dec. 3, 2010.
doi: 10.1109/CloudCom.2010.44

Key Points:
  • PROBLEM: How to provide data security and access control for outsourced sensitive data sharing via Cloud. Also how to prevent illegal key sharing among dishonest authorised users.
    • For each file, the system defines and enforces access policies based on attributes; a user can only access a file if their attributes satisfy the file’s access structure. A file is encrypted with a symmetric key, which is then encapsulated using the CP-ABE scheme. Users whose attributes satisfy the CP-ABE policy can decrypt the key and consequently decrypt the data itself.
    • Achieves user accountability in fine-grained data access control systems, implemented via a traitor-tracing technique.
    • Deploys Cloud servers to carry out revocation operations.
  • The complexity of file encryption relates only to the number of access policies associated with a file, not the number of users.
  • Creation and deletion of files and users only affects the file/user in question and does not require system-wide updates or rekeying.
  • The heavy user-revocation operations are delegated to the Cloud. Even though this takes the burden off users, it is still not a clean solution, as the Cloud may have to deal with millions of heavy revocation operations.
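
The hybrid pattern in the first bullet (symmetric encryption of the file, CP-ABE encapsulation of the key) can be sketched as below. A plain attribute-set test stands in for the actual CP-ABE decryption, XOR for the symmetric cipher, and all names are hypothetical:

```python
# Toy hybrid encryption gate: the file key is released only to users
# whose attributes satisfy the file's policy; in real CP-ABE the policy
# is embedded in the key encapsulation itself rather than checked here.
import os

def _xor(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_file(plaintext: bytes, required_attrs: set):
    file_key = os.urandom(16)
    return {"policy": required_attrs,
            "wrapped_key": file_key,  # would be a CP-ABE ciphertext
            "ciphertext": _xor(plaintext, file_key)}

def decrypt_file(blob, user_attrs: set):
    if not blob["policy"] <= user_attrs:
        raise PermissionError("attributes do not satisfy the access structure")
    return _xor(blob["ciphertext"], blob["wrapped_key"])
```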

Gerome Miklau and Dan Suciu. 2003. Controlling access to published data using cryptography. In Proceedings of the 29th international conference on Very large data bases - Volume 29 (VLDB '03), Johann Christoph Freytag, Peter C. Lockemann, Serge Abiteboul, Michael J. Carey, Patricia G. Selinger, and Andreas Heuer (Eds.), Vol. 29. VLDB Endowment 898-909.

Key Points:
  • PROBLEM: The trust, privacy and security issues involved when sharing data are immense, yet sharing is often imperative when users are encouraged or forced to do so.
    • Provides protection of XML files
    • The data owner defines high-level access policies, which are converted to queries and then into a single “protection” for the XML data.
    • A logical data model for these protections is introduced.
    • Shows how to perform encryptions using W3C Recommendation “XML Encryption Syntax”
  • Not really relevant to allowing data owner access control over his data in distributed systems.

Sabrina De Capitani di Vimercati, Sara Foresti, Sushil Jajodia, Stefano Paraboschi, and Pierangela Samarati. 2007. A data outsourcing architecture combining cryptography and access control. In Proceedings of the 2007 ACM workshop on Computer security architecture (CSAW '07). ACM, New York, NY, USA, 63-69. DOI=10.1145/1314466.1314477

Key Points:
  • PROBLEM: Enforcement of authorisation policies and the support of policy updates when outsourcing data on untrusted external servers.
    • Data encrypted as the data owner stores data on an external server.
    • Authorisations and encryption are merged thus allowing access control enforcement to be outsourced together with the data.
  • Relies solely on cryptography for the protection and confidentiality of data.
  • Data owner does not need to be involved in the enforcement, only to specify the policy.
  • The paper does not handle the illegal key sharing problem.

Michael S. Kirkpatrick and Sam Kerr. 2011. Enforcing physically restricted access control for remote data. In Proceedings of the first ACM conference on Data and application security and privacy (CODASPY '11). ACM, New York, NY, USA, 203-212. DOI=10.1145/1943513.1943540

Key Points:
  • PROBLEM: Restricting access only to known, trusted devices.
    • Proposes physically restricted access control, where data can only be accessed on unique devices characterised by physically unclonable functions (PUFs).
    • Defines protocols for registering a device and making an access request.
    • Presents a prototype implementation of a client-server architecture which includes the creation of a PUF.
  • Provides the best level of security when sharing data, as the data owner can nearly guarantee that his data is being viewed by the right data consumer.
  • Lower chance of data leakage.

eHealth Journal Paper: 

- Notified by the publisher that a minor revision is required.
- Currently working on the revision.

eHealth Demo:
- Successfully coded initial phase of protocol.
- Database and web services set up with minimal functionality

ACM CCS Conference Paper:
- Started writing Abstract of paper
- Currently working on Introduction

Tuesday, 9 April 2013

7 papers reviewed:
A. Squicciarini, G. Petracca, and E. Bertino, "Adaptive Data Protection in Distributed Systems," Third ACM Conference on Data and Application Security and Privacy (CODASPY), February 2013.

Key Points:

  • MOTIVATION: Ensure customers' data protection policies are honored regardless of where the data is physically stored and how often it is accessed, modified and duplicated.
  • PROBLEM: Ensuring that policies associated with data distributed across domains are honored is an important challenge. Data in the Cloud is stored and replicated in multiple locations around the world; jurisdiction laws must be obeyed while the data owner's privacy is maintained.
  • CONTRIBUTION: Uses self-controlling objects to protect data and enforce the policies set out by the data owner.
    • Innovative policy-enforcement techniques for adaptive sharing of user's outsourced data.
    • Uses the idea of self-controlling objects (SCOs), which encapsulate sensitive resources such as images, video and text, and assure their protection through adaptive security policies. SCOs use Java JAR technology.
    • The security of objects stored in JARs is managed by CP-ABE schemes
  • The data is encapsulated in JAR files, which makes it portable and usable on any hardware or operating system with the popular Java Runtime Environment installed.
  • When modifications take place on one computer, the SCO automatically updates other identical SCOs to contain the modified data, allowing very neat collaboration without trusting the Cloud.
  • The required trust in outsiders is further reduced, which, combined with the simple idea, makes the solution attractive for future needs.

  • Issue: Once the data is decrypted, the user can still find where the decrypted file is contained and save a copy to be redistributed to other users. The decrypted data is not monitored for illegal operations, only the SCO.
  • The ACP needs to be better hidden.

Mohamed Shehab, Elisa Bertino, and Arif Ghafoor. 2005. Secure collaboration in mediator-free environments. In Proceedings of the 12th ACM conference on Computer and communications security (CCS '05). ACM, New York, NY, USA, 58-67. DOI=10.1145/1102120.1102130

Key Contributions:

  • MOTIVATION: Collaboration and Interoperability in multi-domain environments provides benefits but suffers security issues
  • PROBLEM: The paper is attempting to solve the problem of secure interoperability in a multi-domain environment without a mediator having a global view
  • CONTRIBUTION: Decentralises access control with the removal of a mediator to control collaboration. Access control is based on user’s access history, aka user access path. Paper uses idea of paths for secure interoperation.
    • Presents a mediator-free collaboration environment and discuss security challenges in such environment. Access path security requirements are presented for secure collaboration.
    • A framework for secure collaboration in a mediator-free environment, based on access control decisions based on user’s access history.
    • A discussion of several security attacks that can occur in mediator-free environments and ways to mitigate them.

  • Paper has good introduction. It explains the benefits of interoperability in 2 paragraphs and then discusses the problems in 2 paragraphs. The contribution and the paper organisation then follow.
  • The mathematics of the paper is a little difficult and a bit too much, though parts of it were understandable.
  • The problem in relation to my research is that it doesn’t handle the scenario of dishonest users who may share data with unauthorised users (e.g. via email attachments).

Vipul Goyal, Omkant Pandey, Amit Sahai, and Brent Waters. 2006. Attribute-based encryption for fine-grained access control of encrypted data. In Proceedings of the 13th ACM conference on Computer and communications security (CCS '06). ACM, New York, NY, USA, 89-98. DOI=10.1145/1180405.1180418
Key Contributions:

  • MOTIVATION: With the growing amount of sensitive data stored on the internet, there is concern that personal data will be compromised.
  • PROBLEM: The paper is trying to solve the problem of users sharing encrypted data with other users or third parties by either decrypting data and sending to them or by sending them the private key.
  • CONTRIBUTION: A Key-Policy ABE scheme
    • A scheme where each private key is associated with an access structure that specifies which types of ciphertexts it can decrypt, according to the attributes of the ciphertexts.
    • The access structure of a user’s key is a tree whose leaves are attributes; decryption succeeds only if the ciphertext’s attributes satisfy the tree.
    • Prevents collusion between users with similar access structures.
    • Provides a delegation mechanism that allows any user holding a key for one access structure to derive a key for another, but only if the latter is more restrictive than the former.
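
The tree-shaped access structure described above reduces to a simple recursive satisfiability check, shown here without any of the cryptography (gate and attribute names are hypothetical):

```python
# Toy evaluation of a KP-ABE-style access tree: internal nodes are
# gates (AND = all children, OR = any child), leaves are attributes.

def satisfies(node, attrs):
    if isinstance(node, str):      # leaf: a single attribute
        return node in attrs
    gate, children = node          # ("and" | "or", [subtrees])
    hits = sum(satisfies(c, attrs) for c in children)
    return hits == len(children) if gate == "and" else hits >= 1

# Example policy: audit AND (finance OR hr)
policy = ("and", ["audit", ("or", ["finance", "hr"])])
```

In the real scheme, satisfying the tree is what lets Lagrange interpolation of the threshold-shared secrets succeed; this sketch only mirrors the boolean structure.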

  • Paper’s introduction discussed briefly the motivation and problem and discussed in detail the contribution.
  • Paper is relevant and relatively easy to read, though at times confusing.
  • The mathematics of the paper is very heavy and not understandable; I may need a number of follow-up readings to understand the concepts.
  • Still assumes the authorised users are trustworthy and will not accidentally leak the whole data to third parties.

Philippe Golle, Frank McSherry, and Ilya Mironov. 2006. Data collection with self-enforcing privacy. In Proceedings of the 13th ACM conference on Computer and communications security (CCS '06). ACM, New York, NY, USA, 69-78. DOI=10.1145/1180405.1180416

Key Points:

  • MOTIVATION: How to protect individuals from distrustful pollster and how to protect pollsters from fraudulent accusations.
  • PROBLEM: A pollster who wishes to collect private information from individuals of a population may not be able to do so, as individuals, understandably, are unwilling to send sensitive information to untrustworthy pollsters.
  • CONTRIBUTION: Bounty hunters
    • A bounty hunter service listens for leaks of private information and assembles a case against the pollster.
    • The bounty hunter participates in data collection, pretending to be a respondent and submitting “baits” whose decrypted contents cannot be obtained without access to a secret held by the pollster.
    • Any report of a bait’s actual data must therefore have come from the pollster, incriminating the pollster of leaking information.

  • Paper is a good first step to controlling whether the data owner’s data is leaked from the consumer and if it is, it does not go unnoticed.

Alexandra Boldyreva, Vipul Goyal, and Virendra Kumar. 2008. Identity-based encryption with efficient revocation. In Proceedings of the 15th ACM conference on Computer and communications security (CCS '08). ACM, New York, NY, USA, 417-426. DOI=10.1145/1455770.1455823

Key Points:

  • MOTIVATION: In the setting of IBE, there has been little work on studying revocation mechanisms.
  • PROBLEM: In an ID-based/PKI-based system, users have to regularly keep in contact with PKG, prove their identity and get new keys whether their keys have been exposed or not. The PKG has to be online at all times for this.
    • Paper discusses a new way to mitigate the limitations of IBE with regard to revocation and improves efficiency of previous solutions.
    • Revocable IBE and its security models are defined and discussed.

  • May provide a good revocation scheme, however, is very limited in providing good access control and monitoring.

Amit Sahai and Hakan Seyalioglu. 2010. Worry-free encryption: functional encryption with public keys. In Proceedings of the 17th ACM conference on Computer and communications security (CCS '10). ACM, New York, NY, USA, 463-472. DOI=10.1145/1866307.1866359

Key Points:

  • MOTIVATION: The ability to send files to other users without worrying about whether they have the right to access the data.
  • PROBLEM: When a co-worker requests access to data, it is unclear whether the co-worker has the right to access it; nevertheless, these kinds of unauthorised access still occur.
    • Discusses the need for a scheme to be secure against eavesdroppers, the need for the policy of a ciphertext to remain hidden, the user’s public key should reveal no information about his credentials, and even if the certification authority is corrupted, it should not be able to compromise the security of any honest user.
    • Suggests the notion of Worry-Free Encryption, since a sender does not need to worry about whether a recipient is authorised to obtain a message before sending it.
    • A public/private keypair is generated for each bit of the user’s credentials. The public keys are then sent to the Certificate Authority, which masks the user’s credentials in the public key.
    • The encrypter generates a function to be sent and encrypts each part of it under each of the user’s public keys. The user retrieves the function pieces corresponding to his credentials, revealing the function and hence the data.

  • Could be useful to protect data from being viewed by unauthorised users.
  • Storing a number of public/private key pairs could introduce key management complexity and is costly on user machines.
  • Once the data is decrypted, an authorised user, Alice, may still send it to an unauthorised user, Bob. The paper assumes Alice is trusted and merely unsure whether Bob is allowed to view the data.

Mohamed Nabeel and Elisa Bertino. 2011. Poster: towards attribute based group key management. In Proceedings of the 18th ACM conference on Computer and communications security (CCS '11). ACM, New York, NY, USA, 821-824. DOI=10.1145/2093476.2093502

Key Points:

  • MOTIVATION: Current group key management schemes are not well designed to manage group keys based on the attributes of group members
  • PROBLEM: How to efficiently handle group dynamics (e.g., joining and leaving of members) and also how to defend against collusion attacks.
    • An expressive Attribute-Based Group Key Management Scheme (AB-GKM) which allows one to express any threshold or monotonic conditions over a set of identity attributes.
    • Improves the performance of broadcast GKM schemes.

  • Although the data owner has fine-grained and effective access control over who can view his data, he does not know how the data is being used by members (e.g., illegal transfers).

Development of e-health demo:
- Finished stage 1 of coding: Initialisation
- Working on stage 2 of coding: Consumer Authorisation
- Still need to test stage 1 coding to see if it is working

Wednesday, 3 April 2013

Giuseppe Ateniese, Randal Burns, Reza Curtmola, Joseph Herring, Osama Khan, Lea Kissner, Zachary Peterson, and Dawn Song. 2011. Remote data checking using provable data possession. ACM Trans. Inf. Syst. Secur. 14, 1, Article 12 (June 2011), 34 pages. DOI=10.1145/1952982.1952994

Key contributions:
  • The Provable Data Possession (PDP) protocol challenges the storage service provider (SSP) to check whether the data still exists.
  • Allows an auditor to check for proof of data possession, validating whether the server still holds the data originally stored by the client, using Remote Data Checking (RDC).
  • Tags are generated by the data owner (DO) for each block of the file and stored along with the file at the SSP.
  • The DO challenges the SSP for random data blocks and verifies the proof to confirm the data still exists on the server.
  • Lightweight and robust: lightweight since spot checking verifies only a random portion of the data, robust since it protects against arbitrary data corruptions.
  • Fixes small data corruptions
  • I like the use of a high-level overview of the protocol just before the technical details.
  • Doesn't protect against data stealing
  • Doesn't handle dynamic operations
  • Doesn't handle the case of illegal transfer of files. It just checks for data existence
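
The tag-and-challenge flow above can be sketched with HMAC tags standing in for the paper’s homomorphic tags (so this toy loses PDP’s bandwidth advantages); all names are hypothetical:

```python
# Toy remote data checking: the data owner (DO) tags each block with a
# keyed HMAC, hands blocks and tags to the storage provider (SSP), and
# later spot-checks a subset of blocks against the tags.
import hashlib
import hmac

def make_tags(blocks, owner_key: bytes):
    # Bind each tag to both the block index and its content.
    return [hmac.new(owner_key, str(i).encode() + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def prove(blocks, tags, challenge):
    # The SSP answers a challenge with the requested blocks and tags.
    return [(i, blocks[i], tags[i]) for i in challenge]

def verify(proof, owner_key: bytes) -> bool:
    return all(
        hmac.compare_digest(
            tag,
            hmac.new(owner_key, str(i).encode() + blk, hashlib.sha256).digest())
        for i, blk, tag in proof)

# A spot check challenges only a random subset of block indices,
# e.g. random.sample(range(len(blocks)), k).
```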

Bo Chen and Reza Curtmola. 2012. Robust dynamic remote data checking for public clouds. In Proceedings of the 2012 ACM conference on Computer and communications security (CCS '12). ACM, New York, NY, USA, 1043-1045. DOI=10.1145/2382196.2382319

Key contributions:
  • Continues on the work of RDC but instead handles dynamic operations (insertions, updates, deletes) on data.
  • Uses Reed Solomon codes based on Cauchy matrices which provide communication-efficient code updates
  • Handles robustness for dynamic operations
  • Paper too technical

Lingfang Zeng, Zhan Shi, Shengjie Xu, and Dan Feng, "SafeVanish: An Improved Data Self-Destruction for Protecting Data Privacy," 2010 IEEE Second International Conference on Cloud Computing Technology and Science (CloudCom), pp. 521-528, Nov. 30-Dec. 3, 2010.
doi: 10.1109/CloudCom.2010.21

Key Contributions:
  • Data self-destructs after a period of time by destroying the encryption key, rendering the data useless.
  • Defends against sniffing and hopping attacks, which could read and store decryption keys before they are destroyed.
  • The ciphertext still remains after the decryption key is destroyed, leaving it vulnerable to traditional attacks (cryptanalysis/brute force) that could reveal the plaintext.

Fengshun Yue, Guojun Wang, and Qin Liu, "A Secure Self-Destructing Scheme for Electronic Data," in Proceedings of the 2010 IEEE/IFIP 8th International Conference on Embedded and Ubiquitous Computing (EUC), pp. 651-658, 11-13 Dec. 2010.
doi: 10.1109/EUC.2010.104

Key contributions:
  • Electronic data is automatically destroyed after a specified period of time without any user intervention.
  • Does not rely on third parties.
  • Resists traditional attacks (cryptanalysis/brute force) as well as attacks on the Distributed Hash Table (DHT) network, because the decryption key and part of the ciphertext are destroyed along with the expiring DHT entries.
  • Encapsulates data into Vanishing Data Objects (VDOs), which can later be decapsulated back into data provided they are within the time constraints.
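A minimal sketch of the encapsulate/decapsulate cycle, under loud assumptions: a toy in-memory stand-in for the DHT and XOR-based n-of-n key shares, where the real scheme uses threshold secret sharing over a live DHT network. All class and function names here are mine.

```python
import os
import time

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class FakeDHT:
    """Toy stand-in for a DHT node set: entries expire after `ttl` seconds."""
    def __init__(self, ttl):
        self.ttl, self.store = ttl, {}
    def put(self, k, v):
        self.store[k] = (v, time.time())
    def get(self, k):
        v, t = self.store.get(k, (None, 0))
        return v if v is not None and time.time() - t < self.ttl else None

def encapsulate(data: bytes, dht: FakeDHT, n=3):
    key = os.urandom(len(data))
    ciphertext = xor_bytes(data, key)        # one-time pad, for illustration only
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)
    shares.append(last)                      # XOR of all n shares == key
    locators = [os.urandom(8).hex() for _ in shares]
    for loc, s in zip(locators, shares):
        dht.put(loc, s)                      # shares vanish when the DHT expires them
    return ciphertext, locators              # the VDO

def decapsulate(vdo, dht: FakeDHT):
    ciphertext, locators = vdo
    shares = [dht.get(loc) for loc in locators]
    if any(s is None for s in shares):
        return None                          # a key share expired: data is gone
    key = bytes(len(ciphertext))
    for s in shares:
        key = xor_bytes(key, s)
    return xor_bytes(ciphertext, key)
```

Once the TTL elapses the DHT discards the shares, so the key can no longer be reassembled and the VDO is effectively destroyed without any owner action.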

Tuesday, 12 March 2013

Paper reviews

Reviewed 4 papers.

A paper reviewed: Kayem, A.V.D.M., "On monitoring information flow of outsourced data," in Information Security for South Africa (ISSA), 2010, pp. 1-8, 2-4 Aug. 2010.
doi: 10.1109/ISSA.2010.5588602
Key Ideas/Contributions:
- Prevents authorised users from illegally exchanging data.
- Uses an invisible digital watermark, which is a hash of the encrypted data and the key.
- A hash of the user's role key is compared against the data hash before data access is enabled.
- Keeps data secure from both unauthorised users and the service provider.

- Neat paper structure, especially the first two sections.
- Prevents authorised users from transferring data to unauthorised users, even when the data is fully decrypted.
- Doesn't give the data owner full control, e.g. over how the data is to be viewed, how many copies can be made, etc.

Paper reviewed: Qihua Wang and Hongxia Jin. 2011. Data leakage mitigation for discretionary access control in collaboration clouds. In Proceedings of the 16th ACM symposium on Access control models and technologies (SACMAT '11). ACM, New York, NY, USA, 103-112. DOI=10.1145/1998441.1998457
Key Ideas/Contributions
  • Provides a controlled SaaS collaboration environment for collaboration and information sharing between different organisations
  • Uses the idea of mandatory access control policies (MAC Policy) to control data sharing among different organisations based on the organisation's code-of-conduct and non-disclosure agreements (NDA)
  • Users also have a contact list from which they can select users to share information with. Provided a contact satisfies the MAC policy conditions, users can share information with other organisations without fear of accidentally leaking it to an unauthorised organisation.
  • Users may also make typos when sharing data and accidentally leak information to unauthorised users, which can be costly for organisations. The solution includes a recommender algorithm that checks whether the selected user is relevant to the data based on keyword strength; if not, it warns the user and suggests a better candidate from the user's contacts.
  • Neat paper structure
  • Data access control mainly from business perspective
  • Business users can share data without worrying about breaking code-of-conduct and MAC Policies.
  • MAC Policies also prevent users sharing data outside the perimeter of the authorised organisation(s).
  • The solution helps prevent users from mistyping user names when sharing: it issues warnings and suggests the most likely intended user based on that user's interest in the data.

  • Policies are not fine-grained enough: access is controlled per organisation, not per role.
  • Only protects honest users from leaking information by mistake; a malicious user could craft fake keywords and deliberately share data with anyone.
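The two mechanisms, the MAC policy check plus the typo-tolerant recommender, can be sketched as below. The organisation and contact names are invented, and the recommender here is a much-simplified stand-in for the paper's keyword-strength algorithm.

```python
from difflib import get_close_matches

# Hypothetical data: organisations covered by an NDA, and each
# contact's interest keywords (all names invented for illustration)
NDA_ORGS = {"acme.example", "globex.example"}
CONTACTS = {
    "alice@acme.example": {"audit", "finance"},
    "bob@globex.example": {"security", "cloud"},
}

def mac_allows(recipient: str) -> bool:
    # MAC policy: the recipient's organisation must be under an NDA
    return recipient.split("@")[-1] in NDA_ORGS

def recommend(typed: str, doc_keywords: set):
    """If the typed name isn't a known contact, suggest a likely intended
    recipient: a close spelling match, else the contact whose keyword
    overlap with the document is strongest."""
    if typed in CONTACTS:
        return typed
    close = get_close_matches(typed, list(CONTACTS), n=1)
    if close:
        return close[0]
    return max(CONTACTS, key=lambda c: len(CONTACTS[c] & doc_keywords))

assert mac_allows("alice@acme.example")
assert not mac_allows("eve@evilcorp.example")
# A typo in the address is caught and the likely contact suggested
assert recommend("alice@acme.exmaple", {"finance"}) == "alice@acme.example"
```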

A Paper reviewed: Maritza L. Johnson, Steven M. Bellovin, Robert W. Reeder, and Stuart E. Schechter. 2009. Laissez-faire file sharing: access control designed for individuals at the endpoints. In Proceedings of the 2009 Workshop on New Security Paradigms (NSPW '09). ACM, New York, NY, USA, 1-10. DOI=10.1145/1719030.1719032
Key Ideas/Contributions

  • Laissez-Faire file sharing is defined by 5 properties - ownership, freedom of delegation, transparency, dependability and minimisation of friction.
  • Most users in an enterprise who must abide by strict file-sharing policies almost always end up circumventing the organisation's file-sharing system, sharing files through email attachments, USB drives, etc., because the official system is too limiting and inconvenient.
  • Email attachments deny the data owner the ability to permanently delete files, to prevent readers from forwarding data to others, or to stop others from modifying the data.
  • Highlights the need for a controlled data sharing environment
  • Highlights the reality that many people find other ways to share data (e.g email attachments, USB) when data sharing laws are too restrictive
  • Laissez Faire sharing does not prevent re-sharing of data

A Paper reviewed:
Burnap, P. and Hilton, J., "Self Protecting Data for De-perimeterised Information Sharing," in Proceedings of the Third International Conference on Digital Society (ICDS '09), pp. 65-70, 1-7 Feb. 2009.
doi: 10.1109/ICDS.2009.41
Key Ideas/Contributions
  • Provides access control on machines outside the perimeter of the organisation or enterprise
  • Data remains encrypted throughout its lifetime and can only be decrypted if user has access rights.
  • Access control is applied to parts of the document, so that each user only has rights to certain parts; the parts are classified into categories.
  • A subsection of a document may be highly confidential while other sections are publicly available. Traditionally, the whole document would be restricted to those with access rights, limiting the effectiveness and dynamism of collaborative working; this solution protects some parts of a document while leaving others publicly available.
  • Access control still stays in place when shared, copied, transferred, and stored on other organisation’s systems.
  • Doesn’t provide data owner control over his data. The data only controls who views the data but doesn’t let the data owner know if any other operations occur with the data that the data owner doesn’t know about, such as tampering or distributing illegal copies of the data. Hence, not enough data control.