FF-Sec

Sometimes I feel like a motherless child who stores memories in digital paintings and analog photos. - CISO for a company in Germany - likes drawing on iPad and taking analog photos - interested in too many topics - (chaos) magician and dimensional traveler - thinks . o O ( "Senior Nerd" should be a job title )

Never again Docker on macOS

If you don't want to use Docker Desktop because it's extremely annoying without a Pro account - you always have to apply every update immediately - Docker on macOS can become a real pain in the ass. In theory, docker-machine should be an alternative. Unfortunately, it often causes problems with VirtualBox or the network configuration, and the HyperKit driver doesn't always work either. Fortunately, I always have enough Linux machines available. However, it is of course annoying to always log in to another computer just to build a Docker image. But there is a simple solution for this if you have a Linux machine available.

First of all, of course, install Docker on the Linux machine. Make sure your user can use Docker by adding it to the docker group. Then set up an SSH connection for which you don't need a password. For this, create an SSH key with ssh-keygen. Add the public key to $HOME/.ssh/authorized_keys on the Linux host (create the file if it doesn't exist). On your Mac, add the private key to the SSH agent using ssh-add /path/to/private/keyfile. Test the connection by logging into your Linux machine using ssh username@ip-address.
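
On the Mac side, the whole setup boils down to a handful of commands. A rough sketch (the key file name, the user name and the IP address 192.168.1.10 are just placeholders):

# create a key pair without reusing your default key
ssh-keygen -t ed25519 -f ~/.ssh/docker-remote
# copy the public key to the Linux host (or append it to ~/.ssh/authorized_keys there manually)
ssh-copy-id -i ~/.ssh/docker-remote.pub username@192.168.1.10
# load the private key into the SSH agent
ssh-add ~/.ssh/docker-remote
# test: this should log you in without a password prompt
ssh username@192.168.1.10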

If everything works, create a context for your Docker CLI:

docker context create myserver --docker "host=ssh://username@X.X.X.X:22"

And finally tell the Docker CLI to use this context:

docker context use myserver

Now you can use your docker command as usual and it will automatically use the Docker daemon on the Linux host.
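
If you want to double-check which daemon the CLI is now talking to, something like this helps (the context name is whatever you chose above):

# the active context is marked with an asterisk
docker context ls
# should print the hostname of the Linux machine, not of your Mac
docker info --format '{{.Name}}'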

How to create a secure password that you can easily remember

You need a secure password that you can easily remember? Do the following:

  1. Make a sentence of at least 8 words that you can easily remember.
  2. Take the first letter of each word.
  3. Make every second letter of them a capital letter.
  4. Replace 'E' by '3', 'I' by '1', 'S' by '5', 'B' by '8', 'O' by '0' and 'a' by '@' (because th353 num83r5 l00k 51m1l@r t0 th3 r3pl@c3d l3tt3r5).
  5. Add a special character at the beginning or at the end of the string (especially if there was no 'a' in it that you could replace with '@').

Example: My mother always roasts a goose at Christmas and adds baked potatoes.

All first letters: MmaragaCaabp
Every second letter a capital letter: MMaRaGaCaAbP
Your password: MM@R@G@C@AbP.
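
If you want to play with the recipe, here is a rough shell sketch of the same steps (the sentence is the example from above, and a '.' is appended as the special character):

sentence="My mother always roasts a goose at Christmas and adds baked potatoes"
# take the first letter of each word
initials=$(echo "$sentence" | awk '{for(i=1;i<=NF;i++) printf substr($i,1,1)}')
# make every second letter a capital letter
caps=$(echo "$initials" | perl -pe 's/(.)(.)/$1\U$2/g')
# do the replacements and add the special character at the end
echo "$caps" | sed 's/E/3/g; s/I/1/g; s/S/5/g; s/B/8/g; s/O/0/g; s/a/@/g; s/$/./'
# prints: MM@R@G@C@AbP.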

Hackers will probably take a while to crack this password. 😉

The 10 Golden Rules of Lomography

Rule #1 - Take your camera everywhere you go.
Rule #2 - Use it any time – day and night.
Rule #3 - Lomography is not an interference in your life, but part of it.
Rule #4 - Try the shot from the hip.
Rule #5 - Approach the objects of your Lomographic desire as close as possible.
Rule #6 - Don’t think (William Firebrace).
Rule #7 - Be fast.
Rule #8 - You don’t have to know beforehand what you captured on film.
Rule #9 - Afterwards either
Rule #10 - Don’t worry about any rules

Ensuring Information Security - A more practical view with TOMs

Introduction

Have you ever wondered what a company in the EU has to do to ensure information security and meet all the related legal requirements? You can get a rough idea by looking at the so-called "Technical and Organizational Measures" (TOM) which a company has to implement in its processes.

Whenever I audit our information security measures I create a list of TOMs relevant for the processes of our company. In addition to the TOMs, I check the list of security assets, their risks and the implemented controls, the inventory of used IT systems and the index of data processing activities to get an overview of what I have to audit. If one of them doesn't exist, I create it accordingly.

Here's the list of TOMs which are relevant for my current company. I use it as a checklist for my internal audits, but of course every audit starts with checking whether new TOMs are required because of changed conditions. This way the TOMs are improved whenever necessary.

The list of TOMs is separated into sections which can be found in almost every modern company. I used a checklist provided by the Bavarian Federal Official for Data Protection & Privacy as a template to assemble this list, but I aligned it with my knowledge, the requirements of our company and the procedures of my own security management approach.

Technical and Organizational Measures to ensure Information Security

Management and Organisation

Insufficient security structures in an organization can significantly disrupt operational procedures. Existing professional competences must therefore be utilized. Not only the CTO but also the Data Protection Official (DPO) and the CEO must be involved in the process of implementing security requirements.

  • A suitable organizational structure for information security is in place and information security is integrated into organization-wide processes and procedures
  • Security policies and guidelines are defined, approved by the management and communicated to the staff.
  • The roles of the individual employees in the security process are clearly defined
  • The CEO is involved in the security management processes, especially in the risk management.
  • Regular reviews of the effectiveness of the technical and organizational measures according to the PDCA cycle (Plan-Do-Check-Act) are conducted
  • Concepts and documentation in the security environment are regularly reviewed and kept up to date
  • Depending on the size of the company: Use of a suitable information security management system (ISMS), e.g. according to ISO/IEC 27001, BSI standards or ISIS12
  • The roles and responsibilities in information security are known and filled within the company (e.g. Information Security Officer (CISO), IT Manager (CTO), Data Protection Official (DPO) etc.)
  • DPO is consistently involved in security topics
  • DPO has sufficient professional qualification for security-relevant topics and opportunities for further training on these topics
  • Regular audits of the DPO in accordance with Art. 32 GDPR on the security of processing are conducted
  • Knowledge of the responsible data protection supervisory authority as well as knowledge of the notification obligations according to Art. 33 and 34 GDPR (breach of security) is in place and documented
  • Escalation processes in the event of security breaches (who is to be informed when and how?), e.g. in emergency management, are known to all employees and documented
  • Consistent documentation of security incidents (security reporting) exists
  • Active support of the DPO by the company management is ensured
  • Insights into (new) digital threats are gathered and potential impacts on the own business are derived

Physical security of the infrastructure

Physical access to IT systems and PII must be made difficult for unauthorised persons. Serious damage caused by (natural) events such as fire or water must also be prevented as far as possible.

  • A comprehensive overall concept for facility security in general (e.g. fire protection, access restriction and control) exists

  • A concept for access restrictions and physical access control (perimeter protection) exists

  • Clear rules for dealing with visitors (e.g. companions, safety zones, visitor badges, logging, responsible staff member for visitors) are part of the concept
  • Rules for dealing with external service providers (e.g. contracts for work, craftsmen, maintenance of systems) - such as non-disclosure agreements, personal supervision in security zones or logging - exist and are practiced as part of the corresponding business processes
  • Different security zones (e.g. visitors' meetings, server rooms, workplaces, research area) are defined
  • For security zones: A current overview for authorization management (Which employee is allowed in which zone?) exists
  • For security zones: Access to security zones is restricted with suitable technology (via keys/chip cards, possibly also other factors).
  • For security zones: Self-closing doors at zone transition
  • For security zones: If applicable, signage indicating which zone should/must not be entered
  • Secure locking systems including documented key management
  • Concept for fire protection is in place
  • Usage of fire/smoke detection systems (within the framework of the fire protection concept)
  • Use of automatic extinguishing systems in server rooms (e.g. CO2 extinguishing), taking into account occupational health and safety regulations
  • Fire-retardant cabinets/ safes for storing essential components (e.g. backup tapes, important original documents)
  • The building (e.g. walls, windows) and the infrastructure (e.g. pipes, hazard detection systems) are regularly inspected and maintained
  • Fencing of the premises
  • Stable, intruder-resistant windows and doors on the ground floor (e.g. according to DIN EN 1627)
  • Alarm systems for detecting intruders, especially outside working hours, are in place
  • Deployment of security personnel (external if necessary)
  • Use of video surveillance systems in consideration of data protection requirements (monitoring of access protection)
  • Sufficient air conditioning of server rooms
  • No (openable) windows in server rooms
  • Use of equipment to ensure the power supply of server systems (uninterruptible power supply (UPS)), especially in the event of short-term power failures or fluctuations
  • Protection against natural hazards (especially fire, smoke, shocks, chemical reactions, floods, power failures, explosions and attacks/vandalism) is in place
  • Check risks due to flooding/heavy rain, especially for server rooms in the basement or other vulnerable areas

Awareness of the Employees

Employees are now increasingly the focus of cyberattacks. Sophisticated social engineering techniques are used to trick them into carrying out security-critical actions. Employees must therefore be trained in security issues in order to defeat such attacks.

  • All employees of the company must receive appropriate training in information security and data protection as relevant to their role
  • Data protection training for new employees promptly after taking up employment
  • Regular refresher training for existing staff (at least once a year)
  • Everyone in the company is regularly informed about new developments in data protection and IT security (e.g. by email, intranet, collaboration platform, notice board)
  • Relevant guidelines, e.g. on email/internet use, dealing with malicious code messages, use of encryption techniques, are kept up to date and are easy to find (e.g. on the intranet)
  • Data protection manual (which e.g. also provides training content) is accessible to all employees
  • Training content: Selected employees involved in the detection of security breaches (such as CTO, DPO, management, executives, support) know the internal processes for dealing with incidents (including notification according to Art. 33 GDPR, emergency plan/incident response plan)
  • Employees are trained on how cyber attacks are initiated by means of social engineering (help for self-help)
  • Employees are trained about the dangers of email communication, especially with encrypted email attachments (e.g. zip file with password)
  • Employees can recognise fake emails (e.g. sender addresses, conspicuousness, embedded links)
  • Raise awareness of staff interacting with external parties, such as suppliers, on appropriate rules of engagement, policies, processes and behavior (including what data may be shared and in what form, what may be security critical)
  • Employees affected by working from home know how to use home office solutions and specific risks are pointed out

Authentication

Digital access restrictions help in everyday life. Users of IT systems and services must therefore prove their access authorization by suitable means.

  • All employees are instructed in the use of authentication procedures and mechanisms
  • Regulated process for central administration of user identities, especially for creation (e.g. new employee), change (e.g. name change after marriage) and deletion (e.g. employee leaving)
  • Assignment of unique identifiers for each user
  • Avoidance of group identifiers
  • If the usage of group identifiers cannot be avoided: Use of data protection-compliant logging of the associated user activities
  • Use of strong passwords and publication of a guideline for this - e.g. at least 10 characters for random complex passwords or at least 16 characters for simpler character strings that don't directly use common words
  • Enforcement of the password policy for strong passwords as automatically as possible in the systems that use user IDs
  • Preventing the selection of weak passwords in applications (e.g. via policies or technically enforced via the identity & access management system)
  • Passwords are blocked after a security incident, even if only suspected, and must be renewed by the user
  • When a new user logs in for the first time or the password was reset by IT (e.g. if the password is forgotten), the user must change the password
  • Passwords must not be passed on (not even to colleagues, superiors or the IT department) - in exceptional cases (e.g. longer illness) the password is reset by IT and this process is documented
  • Informing employees that passwords must not be recorded on slips of paper or noticeboards
  • No saving of passwords in the browser without securing them with a master password
  • No multiple usage of a password for different services, unless central identity management (e.g. Active Directory, OneLogin etc.) is used
  • Do not send passwords by email (e.g. for a company account to a cloud service)
  • For local admin accounts, particularly strong passwords (e.g. at least 16 characters, complex and without common word parts, and different for each PC)
  • Use of two- or multi-factor authentication procedures for high-risk processing activities (e.g. OTP, smart cards, USB tokens)
  • As far as possible, consistent use of two-factor authentication procedures for administrator accounts in applications
  • With two-factor authentication, biometric features (e.g. fingerprint) should only be used in exceptional cases in central systems (e.g. access control to a security zone) - local storage of the biometric data (e.g. on an iPhone), on the other hand, can be used more widely
  • Automatic blocking of accesses in case of too many incorrect attempts due to wrong password: Either time-based (one hour, six hours, 24 hours) or complete (contact with IT necessary)
  • Time delay between individual login attempts (especially for applications accessible via the Internet) to make automatic online attacks more difficult
  • Display of the number of failed logins for a user who successfully logs in. Goal: Create transparency for attacks or attempted attacks that have taken place
  • Notify user about failed login attempts via email. Goal: Create transparency for attacks or attempted attacks that have taken place
  • Do not store passwords in plain text but use suitable cryptographic procedures (e.g. bcrypt with salt - see the example after this list)
  • Establish rules for automatically locking passwords after a security incident (e.g. change the password hash so that no clear-text password corresponds to it any more)
  • In case smart cards are used as staff badges, check whether they can be used for standard authentications (e.g. operating system login)
  • Default credentials set by the manufacturer of a software product should be changed after installation
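
Regarding the point above on not storing passwords in plain text: a salted bcrypt hash can be generated on any Linux box, for example with htpasswd from the apache2-utils package (user name and password are placeholders):

htpasswd -nbB alice 'MyS3cur3Example!'
# prints something like alice:$2y$05$... - cost factor and random salt are encoded in the hash itself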

Roles/Privileges Concept

Users should only be able to access PII that is necessary for their activities. By introducing user privileges for certain roles (e.g. accounting, IT administration), different privileges are assigned to specific persons.

  • Create role profiles for the employees with reference to the entries of the processing activities index
  • Control and regulate access to information and buildings/areas in a targeted manner via the Roles and Privileges Concept
  • Establish regulations for the administration of roles (assignment, withdrawal) to employees
  • Regularly check (e.g. once a year) whether the assignment of roles corresponds to the specifications and whether the roles still meet the requirements of the business activity
  • No administrator accounts for users who do not perform administrative activities
  • Create various administrative roles (e.g. create new users, perform backups, configure the firewall) for IT administration
  • Do not use superuser (e.g. root on Linux) if possible
  • Set up two user IDs for employees with IT administration tasks: an administration ID and a normal user ID (for non-administrative purposes such as surfing the internet)
  • Establish a rule that no surfing on the Internet or reading/sending e-mails is done using administrator privileges
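
A minimal sketch for the superuser point above, on a Debian/Ubuntu server (the account name is just an example):

# personal admin account with sudo rights instead of a shared root login
adduser alice-admin
usermod -aG sudo alice-admin
# then disable direct root logins via SSH
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl reload ssh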

End user devices (clients)

The end user devices used for daily work must be permanently secured. No or only insufficient regulations usually lead to open vulnerabilities on client systems, which can then pose a considerable threat to the entire organization.

  • A device management (who uses which devices in which area?) is available
  • Automatic locking after a certain period of inactivity if manual locking cannot be guaranteed when leaving the area of influence
  • Apply privacy films to monitors and notebook screens in case of potential unauthorised viewing (e.g. in the entry area of the office)
  • Activation of a firewall that blocks unwanted services on the end device (e.g. inadvertently installed web servers)
  • Use of an anti-virus solution or an endpoint protection system with signature updates at least daily, and regulations on how to proceed in the event of a warning message
  • Central registration of malware alerts by the IT administration
  • IT administration process plan in the event of a malicious code attack
  • Patch management concept in place (including update plan with overview of software used)
  • Regular evaluation of information on security vulnerabilities of the software used, such as operating systems, office software and specialized applications (e.g. through e-mail newsletters, manufacturer publications, specialized media, security warnings)
  • Installation of critical security updates within 24 hours (mandatory), other security-related updates within 7 days (mandatory but can be discussed with the CISO) and all other updates (feature releases and similar) within 4 weeks (if possible)
  • PII must be stored on a storage medium that is covered by the backup (e.g. network drive)
  • Limit the use of external devices to the minimum necessary through technical measures (e.g. USB sticks, smartphones, external hard drives)
  • Deactivate auto start from external media (e.g. USB sticks)
  • Remote maintenance for clients for IT administration purposes exclusively via encrypted connections after authentication by the administrator and approval by the user
  • Using only operating systems and software for which security updates are still available in a timely manner
  • Preventing the execution of software downloaded (from the Internet) whose sources are identified as unsafe
  • Access to websites should be managed restrictively so that the risk of compromise, e.g. by malware, is reduced and access to unauthorised websites is prevented (e.g. via web proxy with up-to-date blacklists)
  • Preventing the automatic execution of applications from the temporary download directory of the Internet browser
  • Applications are to be executed on the end user devices without administrator privileges, if possible
  • Establish a process for effective data deletion before an end user device is given to another employee
  • A security concept for the use of printers, copiers and multifunctional devices is in place (e.g. no unauthorised viewing of printed documents, adequate protection of stored information, proper disposal)
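
For the patch management items above, a large part of the routine can be automated on Debian/Ubuntu clients, for example:

# manual run
apt-get update && apt-get -y upgrade
# automatic installation of security updates
apt-get install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades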

Mobile storage devices

The widespread use of USB storage devices, notebooks and smartphones makes regulations necessary for usage and also in the event of loss. Besides that, unprotected storage media allow unauthorised persons to access sensitive data without much effort.

  • Using strong encryption of mobile end user devices (e.g. hard disk encryption, container solutions)
  • Using backup and synchronization mechanisms to prevent major data loss in case of loss and theft
  • For smartphones: Access only after authentication (e.g. PIN, password) - Length of identifier dependent on automatic blocking and deletion functions
  • For smartphones: Use of biometric access procedures only if the biometric templates are stored locally within a secure chip on the smartphone and for PII with no high risk
  • For smartphones: Use cloud storage for data backup only after careful examination of the data protection requirements (also employee data protection for "Find my Phone" functions)
  • For smartphones: Mobile device management solutions for configuring and managing the devices, the installed apps and locating/deleting them in the event of loss
  • For smartphones: Only secure sources are used for the installation of apps. Apps are tested and approved beforehand
  • Check regulations to see if it is sufficient to be able to access less data than within the internal company network when using mobile workstations (e.g. notebook on a business trip)
  • Provide anti-theft devices (e.g. attachment of lockable steel cables) for notebooks if required
  • Create regulations on private use of notebooks and smart phones - Recommendation: No private use
  • Employees know the regulations in case of loss of a mobile end user device, e.g. report the loss to the company and/or the police
  • For mobile storage devices: There is a guideline for the safe handling of mobile storage devices; Staff members are aware of this policy and are trained in the handling of mobile storage devices
  • For mobile storage devices: Secure deletion of the storage device before and after use is ensured
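
On Linux, encrypting a USB stick or external disk is quickly done with cryptsetup - a sketch (the device name /dev/sdX is a placeholder, and the first command erases all existing data on it):

# encrypt the stick with LUKS
cryptsetup luksFormat /dev/sdX
# unlock it; afterwards create a filesystem on /dev/mapper/securestick and mount it
cryptsetup open /dev/sdX securestick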

Server systems

Server systems must be secured with special care, as security breaches there can usually have enormous consequences due to the large amount of PII.

  • Only competently trained persons are allowed to perform administration activities on the servers
  • Set up different administration roles with privileges according to the least privilege principle for different administration tasks (e.g. software updates, configuration, backup)
  • Regulated process for the timely installation of security updates for the servers - critical updates must be installed within 24 hours
  • Consistent use of two-factor authentication procedures for applications that support this, especially for administrators
  • Disabling/uninstalling standard server services that are not required (e.g. print server)
  • Block local server services from external access via firewall
  • Check further hardening measures for the deployed server operating system
  • Disable sending of telemetry data to manufacturers unless assessed as necessary
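
A quick way to find standard services that can be disabled (the print service is just an example):

# list all listening TCP services and the owning processes
ss -tlnp
# disable and stop a service that is not required
systemctl disable --now cups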

Websites and Web Applications

Websites and web applications are usually easily accessible platforms for attacks, which can usually be well secured with known best-practice approaches.

  • Usage of state-of-the-art HTTPS (TLS 1.2 or TLS 1.3)
  • Ensuring access to databases is only possible for required servers
  • Remote access to web servers only with encrypted connection and two-factor authentication (e.g. SSH with client certificates)
  • Limitation of web application administration areas to specific IP addresses (e.g. VPN gateway)
  • Only trained or competent persons are allowed to perform administration tasks on the servers
  • Regulated process for informing about security updates and timely installing them, especially for common content management systems (CMS)
  • Execution of security tests on web applications according to good practice (e.g. OWASP Testing Guide)
  • No transfer of PII (e.g. mail address) via HTTP GET request, as this data is stored in the web server log files and can be extracted by website trackers
  • Separation of web server, application logic and data storage of a web application by own servers, which are integrated in a suitable firewall architecture (e.g. DMZ - Demilitarized Zone)
  • Blocking the discovery of content by search engines (via robots.txt), if this content is not required to be found by a search engine
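
Whether a web server really only offers TLS 1.2/1.3 can be checked from the outside, e.g. with nmap or openssl (www.example.com is a placeholder, and the -tls1_3 option requires OpenSSL 1.1.1 or newer):

nmap --script ssl-enum-ciphers -p 443 www.example.com
# or: handshake attempts with specific protocol versions
openssl s_client -connect www.example.com:443 -tls1_1 < /dev/null   # should fail
openssl s_client -connect www.example.com:443 -tls1_3 < /dev/null   # should succeed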

Networks

Attacks on one's own network via the Internet are possible in many organizations. To prevent the spread of malicious code, for example, the organization's own network structure must be actively protected against such negative external influences.

  • Appropriate network segmentation: Restrictive separation of sensitive networks (e.g. HR) from administrative networks (using firewall systems and/or VLAN)
  • Deployment of a firewall at the central internet gateway
  • Blocking all services that are not required (e.g. VoIP, peer-to-peer, Telnet)
  • Use of a web proxy through which all HTTP(S) connections must pass
  • Blocking HTTP(S) connections that bypass the web proxy - avoid exception rules
  • Logging and blocking of IOCs (Indicators of Compromise, mostly malicious URLs, IP addresses and file hashes)
  • Regular updating of IOCs from appropriate sources
  • Use of suitable firewall architectures to separate internal-only systems (e.g., workstation, printer) from servers accessible via the Internet (e.g., mail server, Web server, VPN endpoint) - Common: Concept of a DMZ (Demilitarized Zone)
  • Use of wireless access via WLAN only on current WLAN routers with effective access mechanisms (e.g. WPA2 with a password of at least 24 characters, WPA3-Enterprise or use of a RADIUS server)
  • Usage of a WLAN guest access that has no access to the internal network
  • Regulated process for proper configuration of firewalls and regular review of them (e.g., as a requirement for release procedures)
  • Logging at firewall level to detect and analyze unauthorized access attempts between networks
  • Automatic notifications to IT administration when unauthorized processing is suspected
  • Regular checking of the correct configuration of the firewall (e.g. by means of port scans for the company's own IP addresses from external sources and periodic pentests)
  • Use of sufficiently qualified personnel/service provider to configure the firewall
  • Checking incoming e-mails using anti-malware protection
  • Blocking of dangerous email attachments (e.g. .exe, .doc, .cmd)
  • Do not use unencrypted protocols (e.g. FTP, Telnet)
  • Use of intrusion detection systems (IDS) or intrusion prevention systems (IPS)
  • Connecting branch offices or home offices via strongly encrypted VPN connections with client certificate authentication
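
The firewall configuration check mentioned above can be as simple as a regular external port scan against your own public IP addresses (the address below is from a documentation range):

# full TCP port scan from an external host; compare open ports against the documented rule set
nmap -Pn -p- 203.0.113.10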

Archiving

Although archive data is no longer required for daily work, it must sometimes be kept for a certain period of time due to legal retention periods. It must therefore be ensured that the PII it contains is protected.

  • Establish regulations on which data must be retained on which legal basis and the length of the retention period (Storage, Locking & Deletion Guidelines)
  • Define access to archive files: Document, implement and check
  • Archive data must be effectively deleted after the retention period has expired
  • No archiving on storage devices that are unsuitable for long storage periods (e.g. rewritable DVDs)
  • No storage of archive data in productive databases, but transfer of archive data from productive systems to the archive systems
  • Encryption of archive files with suitable key management: store decryption keys in at least two (locally) separated locations
  • Suitable data formats for archiving documents were selected to ensure long-term readability of the data
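
For the encryption of archive files, a simple approach is symmetric encryption with GnuPG (the file name is just an example); the passphrase then has to be stored in two separate locations as described above:

gpg --symmetric --cipher-algo AES256 archive-2020.tar
# creates archive-2020.tar.gpg; decrypt later with: gpg --output archive-2020.tar --decrypt archive-2020.tar.gpg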

Maintenance by Service Providers

The activities of external IT service providers, especially during maintenance, must be monitored and documented. In order to prevent unintentional disclosure of data, PII must be carefully deleted from the hardware that has been taken out of service.

  • Recording of all activities of external service providers
  • Include an NDA in the service contract or have the external employee sign it
  • Define internal employee who monitors (or, if necessary, accompanies) and documents the activities of the external service provider
  • Create regulations for effective data deletion on hardware (e.g., PCs, printers, smartphones) that is taken back by the service provider or manufacturer (e.g., in the event of defects)
  • When using remote maintenance software, regularly apply security updates and pay attention to information about known vulnerabilities or misconfigurations
  • Log remote maintenance by external service providers and limit access to the system being serviced only - if possible, have an employee follow the session on the screen of the serviced system

Logging

By means of suitable logging, security breaches pursuant to Article 33 of the GDPR can also be detected and processed retrospectively. Without a list of user activities, however, it is usually not possible to make a valid assessment of whether and to what extent unauthorized data access has occurred.

  • Create a concept for logging user activities, technical system events, error states, and Internet activities, taking into account data protection requirements (including protection of employee data)
  • Log files are stored on a dedicated logging system (e.g., a central logging server)
  • The clocks of the information processing systems used (PCs, notebooks, etc.) should be synchronized with appropriate time sources to enable targeted analysis during security events
  • Compliance with the purpose limitation of the log files must be ensured: The personnel representative committee must be involved if necessary
  • Regular analysis of log files, even without a specific cause, to detect unusual entries - preferably with automatic heuristics
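
Two of the points above are easy to verify or implement on Linux systems (the log server name is a placeholder):

# shows "System clock synchronized: yes" and the state of the NTP service
timedatectl status
# forward all syslog messages to a central log host via TCP
echo '*.* @@logserver.example.com:514' > /etc/rsyslog.d/90-forward.conf
systemctl restart rsyslog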

Business Continuity

The availability of business processes and the associated IT systems and data must be guaranteed. Within the framework of the backup concept, it is therefore important to ensure an orderly interaction when restoring stored data in order to remain operational in the event of an emergency.

  • Emergency plan for business continuity: regulations on which systems are to be restored in which order, which persons/service providers can be consulted in the event of an emergency, and which reporting obligations exist
  • The emergency plan is regularly reviewed, e.g. through tests and emergency drills
  • Existence of a written backup concept
  • Execution of backups according to the 3-2-1 rule: 3 data backups, 2 different backup media (also "offline" like tape backups) and 1 of them at an external location
  • Suitable physical storage of backup media (e.g. safe, different fire protection zones, etc.)
  • Regular verification that at least one backup is performed daily
  • Regular tests to ensure that all relevant data is included in the backup process and that the recovery works
  • At least one backup system cannot be encrypted by malicious code, e.g. special data backup procedure such as pull procedure of the backup system or air-gap separated (offline) after completion of the backup process
  • Avoid macros in Office documents as far as possible in day-to-day operations to protect against ransomware
  • Permitting only signed Microsoft Office macros or (regular) information, e.g. once a year, to employees about the risks of macro activation (e.g. in Microsoft Word)
  • Prevention of automatic execution of downloaded applications (e.g. software restriction policy and sandboxing)
  • Disable Windows Script Host (WSH) on clients (if it's not required), check whether restricting PowerShell scripts with "Constrained Language Mode" on Windows clients is feasible, or use a web proxy with (daily) updated blocking lists of malicious code download sites (IOCs)
  • Emergency plan includes dealing with encryption trojans / ransomware - this is also available in paper form
  • Review backup and recovery strategy that ensures backups cannot be encrypted by ransomware
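
As a sketch of the 3-2-1 rule with restic (repository paths and host are placeholders): one repository on a second medium, one off-site, plus regular checks that the backups are consistent:

restic -r /mnt/backup-disk init
restic -r sftp:backup@offsite.example.com:/srv/restic init
restic -r /mnt/backup-disk backup /srv/data
restic -r sftp:backup@offsite.example.com:/srv/restic backup /srv/data
# regular test that the repository is consistent
restic -r /mnt/backup-disk check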

Cryptography

The confidentiality, integrity and authenticity of data, systems and entities can be ensured using state-of-the-art cryptographic procedures.

  • Rules for effective usage of cryptography, including key management, should be defined
  • Hash methods can be used to achieve the integrity of data, software and IT systems - state of the art methods include SHA-256, SHA-512, SHA-3, bcrypt, Blowfish
  • Password storage only with "normal" hash functions (e.g. SHA class) if the password has at least 12 characters - use of salt values as protection against lookups in precomputed databases (rainbow tables)
  • Password storage with salt according to the state of the art with e.g. HMAC/SHA256, bcrypt, scrypt, PBKDF2
  • State-of-the-art symmetric encryption with e.g. AES-256 with CBC/GCM mode
  • State-of-the-art asymmetric encryption with e.g. RSA-2048 bit (or higher)
  • Effective key management (generation, distribution, locking) is essential when using cryptographic methods
  • Protect secret keys with strong passwords of at least 16 characters. In the case of high risk, consider using HSMs (hardware security modules) / hardware tokens
  • Obtain SSL certificates from trusted certificate authorities
  • Use HTTPS according to the state of the art (e.g., at least 2048-bit RSA, Perfect Forward Secrecy, HSTS, client certificates if necessary)
  • No usage of cryptographic methods with known vulnerabilities or key lengths that are too short, e.g. DES, 3-DES, MD5, SHA-1 - if legacy systems still require these, perform an individual risk analysis
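
A quick check that, for example, no SHA-1 or MD5 signed certificates are still in use (the file name is a placeholder):

openssl x509 -in server-cert.pem -noout -text | grep 'Signature Algorithm'
# should show e.g. sha256WithRSAEncryption, not sha1WithRSAEncryption or md5WithRSAEncryption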

Data Transfer

Both the exchange of data with other entities via electronic communication networks and the physical transport of mobile storage devices and documents must be secured in such a way that the confidentiality and integrity of the PII is not compromised.

  • Rules must exist for all types of data transfers both within the organization and between the organization and other parties
  • Especially for cloud services, procedures for usage have to be established (including a possible exit strategy to reduce dependencies on individual cloud services).
  • Encryption of mobile storage devices (such as DVD, USB sticks, hard disk) according to state-of-the-art technology
  • For email, cloud platforms: Transport encryption of PII according to the state of the art for normal risk
  • For email, cloud platforms: Transport encryption and content encryption of PII according to the state of the art for high risk
  • For Messenger: transport and content encryption of messages and files
  • Ensuring the integrity of PII through digital signatures, at least in the case of high risk
  • For HTTPS: Use of client certificates to prove authenticity for a closed user group
  • Secure usage of DNS services (DNSSEC, DNS-over-TLS)
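
Transport encryption for email can be spot-checked with openssl (the mail server name is a placeholder):

openssl s_client -starttls smtp -connect mail.example.com:25
# the output shows whether STARTTLS is offered and which protocol and cipher are negotiated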

Software development and selection

Data protection and security must be taken into account at an early stage in the development of one's own software systems or in the selection of software products in one's own business.

  • Relevant employees are trained and know that security-by-design (ensuring confidentiality, availability, and integrity) as a subset of data-protection-by-design is a legal privacy requirement and has impact on key design decisions (product selection, centralized vs. decentralized, pseudonymization, encryption, country of a service provider etc.)
  • Production systems are separated from development/test systems
  • Restrict access to the source code when developing software
  • PII or access credentials aren't stored in source code management
  • System and security testing, such as code scanning and penetration testing, must be performed regularly
  • Sufficient test cycles are considered
  • Continuous inventory of the versions of software or components (e.g. frameworks, libraries) as well as their dependencies exists
  • Standard software and corresponding updates are only obtained from trustworthy sources
  • It's ensured that an ongoing plan exists to monitor, evaluate, and apply updates or configuration changes for the life of a software application
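
A quick and dirty way to check whether credentials slipped into a repository's history (better: run a dedicated scanner such as gitleaks or truffleHog in the CI pipeline):

git log -p | grep -iE 'password|secret|api[_-]?key' | head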

Data processing on behalf

Service providers handling personal data in the context of processing on behalf require appropriate safeguards to also ensure the security of the processing.

  • Use only service providers who can provide sufficient guarantees (in the form of written documents)
  • Security measures according to Art. 32 GDPR as part of a data processing agreement must fit the service - the level of abstraction of the measures is sometimes slightly higher than for internal TOM lists of a controller
  • The effectiveness of the guarantees can be demonstrated (to some extent) by suitable certifications - e.g. ISO 27001 for data centers with physical security scope is usually meaningful
  • An on-site inspection by the controller must not be excluded
  • The processor may not engage any other subcontractors without informing the client - the client then has a right of objection
  • The processor must have processes in place to detect data breaches and report them without delay to the controller as defined by the GDPR
  • Transfers to insecure third countries may only take place with additional technical protection measures, primarily the use of cryptographic procedures
  • Data is effectively deleted in the case of processing on behalf (at the latest) after the end of the contract
  • Details of the deletion technology can be provided if required
  • Periodic review of the processors regarding security practices and service delivery


Art and how it affects my life

After divorcing my ex-wife, I met my current girlfriend. Her drive in life is largely determined by art. Whenever she comes in contact with any kind of art, she explodes into a ball of energy that inevitably carries you along. And I love getting infected by this energy, too. Sometimes we sit together in a room, doing our own stuff... she's drawing, I'm working. But sometimes I put my work aside, pick up my iPad, start drawing and then it happens... I swim through the waves of creative energy along with her.

It's such an impressive feeling when the world around me slowly fades away and only my drawing and I still seem to exist. And after some hours I "wake up" from that trance, look at the screen and think: "Wow, I created that?" This feeling is so unique and yet it also brings back memories of my childhood every time. Because as a child I loved to draw. However, in the course of my career in the IT industry, I stopped creating fine art. It's not that I had no connection to art anymore. I discovered art in software design, and the unfussy beauty of mathematics always excited me. And yet it's something else to paint pictures.

Interestingly, I have also reconnected with my children through art on a deeper level. They enjoy it when they can be with me and my girlfriend and paint pictures. And I enjoy it when I see their colorful drawings, which give me a deeper insight into their world.

It's so impressive to see how much magic is in their view of the world. And that, in turn, makes me rediscover the magic in my world. For a long time I was looking for the magic in myself. I worked with different magical systems, like Enochian Magic, the system of the Order of the Golden Dawn, with Chaos Magick, Tantra, Ice Magick and many more. But it took art as a key to realize that magic is actually all around me. And with this discovery, I also found many things again that I had already lost in my childhood. I rediscovered the beauty of nature and discovered natural mathematical logic. I learned (again) not only to see people, but also to feel them. And I also realized that it is my connection with this world that makes the real magic possible. Suddenly, many things from Chaos Magic and what is commonly called Tantra (I don't like this term because it was and is quite abused by the New Age movement) finally made sense.

And suddenly I'm living in a whole new world. My relationship works, not always without conflict, but always with the possibility of solving these conflicts together. With my children I have a connection like I never had before, although I can no longer live with them (but I see them every day). I enjoy my job again. And many things that I used to see as a problem, don’t affect me that much any longer.

I can therefore only advise everyone to open up to art when they find themselves in a situation where there seems to be no way forward and no way back. Art can be a key to look at the world from new angles and find new ways.

IaC - why you should(n't) use it

Yes, I hate IaC (Infrastructure as Code)... and I love it... sometimes.

Of course there are a lot of advantages to IaC. It makes infrastructure reproducible (partially), auditable (partially) and by that... easier to control (partially). But you should always take a closer look at whether it's really useful for your company. In fact it's not useful for more or less static infrastructures. If you don't run a server network with more than 100 servers, or one that is constantly changing, IaC is for sure not for you. If you run only some dedicated servers in a data center for your website and email and perhaps an OwnCloud or similar, IaC is definitely not for you! Why?

There is no tool that fits your needs

First of all you'll never find a tool that fits all your needs. In the end you'll use a bunch of tools: for example Terraform to provision your virtual machines, Ansible to deploy and update your software, Packer to build VM images, and if you work with cloud environments like Kubernetes you'll also use tools like KOPS and of course Dockerfiles. In the end you have to manage more software than you had to manage before.

More Version Conflicts

"But software management with IaC is much easier!" No, it's not! Because there is another problem: version conflicts. Never expect that the new version of a tool is downward compatible to the version you use. It's more realistic to expect broken state files if you update your tool and any of your team members is still using the old version of this tool. And it's also more realistic to expect, that you have to rewrite some if your IaC code after updates, because of incompatible parameters and similar. So you have to manage more software than you had before. Additional to your tech stack you now have to manage your IaC tools. Congratulations! Perhaps you had a tech stack, which consisted of Webservers, DB servers, Caching layer, Load Balancers and perhaps some security-related tools and a CI/CD suite (in best case everything in containers), and now you have Terraform, Ansible, KOPS and Packer, which also will cause version conflicts. What an advantage! ;)

Collaboration?

One of the biggest pros of IaC is supposed to be better collaboration. This may be correct if you implement very strict guidelines about how to use the IaC suite in your infrastructure team. If you don't, you'll end up with a bunch of cruft code, non-reusable "modules" and islands of knowledge, where some of your team members understand only parts of your IaC infrastructure.

If you decide to use IaC, never forget that other departments or external partners of your company may collaborate with your infrastructure team. The other departments are presumably not involved in update management, and your external partners may never have worked with IaC tools before. Congratulations! You now have some additional problems in your company. Your infrastructure team must integrate the environments which were built by external partners into the IaC infrastructure, and your developers will have only a partial understanding of your infrastructure, because your sysadmins think that IaC is enough documentation.

Time-Eaters

Never forget that IaC consumes a lot of time. It consumes time not only while setting up the infrastructure, but also whenever you change anything in your infrastructure. You need a little change to your infrastructure, like a new VM instance? Ok, what would be the "classical" way?

You login to your cloud/hosting provider and start a new server instance.

You install your software on this new instance.

Perhaps you add it to your Load Balancers.

Done.

What is Terraform doing?

You write your new infrastructure definition.

It checks if the syntax of your IaC code is correct. -> You fix your code.

It checks your state file. -> Hopefully, there are no broken state files, else your sysadmins spend the next hours to fix it.

It checks your infrastructure to see if everything is compliant with the expected state.

It tells your sysadmin what it will do in this run.

Your sysadmin has to confirm it. But in most cases he will check why some of the changes are needed and he must coordinate the changes in your cross-team environment.

It will do all the changes. You cannot simply skip some of them without changing your IaC code.

In the end you'll need an hour for a process that would typically consume 3 minutes, because your sysadmins have to change IaC code or coordinate unexpected or unwanted updates.
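
For reference, the round-trip described above boils down to something like this on the sysadmin's side (assuming the working directory has already been initialized with terraform init):

# syntax check of the IaC code
terraform validate
# refreshes/compares the state and shows what would change
terraform plan
# asks for confirmation, then applies all pending changes
terraform apply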

It becomes even more time-consuming if you decide to integrate IaC into your current infrastructure. The typical way would be to set up a completely new environment that is completely managed with IaC. I have been through this process 3 times now. Every time it bound all resources of the infrastructure team for several weeks. Please don't expect that your current infrastructure is still managed by your sysadmins. They have enough to do with your new IaC-based infrastructure and no time for "the old stuff". So you have to expect that your old infrastructure is not updated until you move to your new "modern" IaC-based infrastructure, unless you add some additional sysadmins to your team.

The first time I did an IaC integration was at a time when tools like Ansible or Terraform were not available. So we wrote our tools ourselves... with Perl. We invested most of our free time in coding, but in the end we had a tool that we called "ASP Tool" (Application Service Providing Tool). It was perfect, because it was designed for our very specific infrastructure, consisting of classical webserver environments (LAMP stacks), some in-house developed search engines and some very project-specific software. And it was perfectly integrated into our CI/CD environment. Furthermore it only did the changes which were defined in the state files. Only an additional parameter triggered a check whether our infrastructure was compliant with the state files. So we could do changes nearly as fast as we would have done them manually. This is not possible with newer tools like Terraform, Ansible, Puppet or Chef, because they will always check the state of your complete network. Sure, you can split your infrastructure into multiple repositories, but that way you'll end up with a lot of repositories, where (hopefully) only your security department will have an overview.

If you use "modern" tools like Ansible, Terraform or Chef, they are never designed for your network. They must meet the requirements of a lot of different environments. This is a nearly impossible balancing act and the reason why you'll end up with a bunch of tools.

Auditability?

Another advantage of IaC is supposed to be better auditability of your setup. This is absolutely correct... from the view of your infrastructure team. But have you ever asked your developers if they understand the infrastructure when they only have the IaC code available? Have you ever asked your security department if IaC helps them to see whether all of your security requirements are met?

I can say from my perspective that at least your security department will clearly answer with a "No!". Most of the IaC tools on the market don't track all manual changes. You load a kernel module that is not defined in your IaC? IaC will ignore it, because IaC doesn't track it. You change a configuration that is not defined in your IaC, because your sysadmins used the default settings? Your IaC tools will not see it. Why? Because these tools only track what you tell them to track. If you don't import all of your configurations and expected system states into your IaC environment, IaC cannot help you with auditability. In the end your security department will use a semi-intelligent intrusion detection system to audit the systems. Congratulations! It's not IaC that helped you to audit your systems, but your IDS. Oh wait... you could also install this IDS without IaC, and the setup would only consume half of the time.

When IaC really helps

Of course there are reasons to use IaC. But you should inspect your environments and your requirements before you decide to use it. If you answer most of the following questions with "No", you shouldn't use IaC:

Do you often change your tech stack?

Is your environment highly scalable?

Do you regularly start completely new environments?

Do you use auto-scaling environments or plan to use them? (If yes, also ask if you really need auto-scaling and how much it can save.)

In fact I've seen only a few environments within 20 years in my job which really needed IaC. One was an agency that had to set up new environments for new customers every few days. Another one was a server network consisting of several hundred servers.

If you had a more or less static environment in the past, where you only add some additional servers every few weeks, you don't need IaC. Your sysadmins will be much faster at setting up new servers if they don't have to use IaC for it. If you work in an environment based on Docker/Kubernetes/Cloud, you can scale your environment with some simple changes to manifest files, and you can add additional nodes to your cluster with basically a single command on the new node. If you use auto-scaling groups on AWS but your tech stack is not constantly changing, you don't need IaC.

IaC also (partially) helps if you must prepare your network to move to another hosting provider. If your hosting provider's data center is destroyed and you must move to another provider within some hours, IaC is a big advantage. It mostly abstracts the API layers of the providers and by that makes it possible to set up your infrastructure from scratch very fast... as long as your IaC code is prepared for it and you stored backups outside of your current provider. In fact most IaC infrastructures are not prepared for such use cases and are therefore mostly useless in such situations.

You need IaC if your tech stack is very flexible, for example if your developers play around with new technologies every few days or weeks. You need IaC if you add additional servers every day or week. You need IaC if you have a very big server network, i.e. >100 servers. You need IaC if your infrastructure team consists of >5-10 employees (provided that you also implement guidelines on how to use IaC). In all other cases you don't need IaC. IaC is just another hype; in the end it's only needed for very flexible or very big environments. And this doesn't apply to most of the small and mid-size IT companies.

What you should consider

If you decide to use IaC you should consider some points:

  1. Set very strict guidelines on how to use IaC. Especially the reusability of modules is a big pain point in most companies. If a module is not reusable, it's not a module! And whether a module is really reusable must be tested!
  2. Provide additional documentation. Even if your sysadmins think that the IaC code is enough documentation, ask for data flow diagrams, documentation of the repository content etc. It will save a lot of time for your developers!
  3. Track and calculate the time that your sysadmins need for IaC. If they need longer than with a manual setup of the servers, stop it immediately!
  4. Ask other departments if they still understand your environment. IaC is an additional abstraction layer that may be confusing for other employees as long as they don't have additional documentation (see 2.).
  5. Ask your external partners if they can work with the tools you use. Otherwise your sysadmins will spend a lot of time integrating the setups of your external partners into your IaC.
  6. Do a cost calculation. If the time saved by IaC doesn't significantly exceed the time your sysadmins normally need to set up new servers, it's not worth using it.

Conclusion

If you really decide to use IaC, you should do some simple calculations and think about some points.

How much time will it take to integrate my current infrastructure into IaC?

Which tools does my infrastructure team have to maintain in addition to my tech stack?

What are the costs for the time my sysadmins need to integrate our infrastructure into IaC, and how much time do they really save with this step?

Is my server environment really flexible and/or scalable enough to justify the high costs of the setup?

Who will manage the current environment until the switch to the new environment can be done?

And don't forget to evaluate the tools before you use them. The biggest chaos arises when you use tools which don't fit your requirements. "This tool is cool" from the mouth of your sysadmins is not an evaluation!

The 4 Levels of IT Security

In principle, the protection of IT systems can be separated into 4 levels: prevention, detection, assessment and response. In the proper combination, they can secure the IT platform of any company as far as is possible.

Prevention

The area of prevention is probably the most comprehensive area for IT system security. It can be fundamentally separated into data protection and system protection. Data protection includes things like data encryption, transport encryption, backups and even access protection to IT systems. System protection, on the other hand, involves things like hardening systems or patch management. Unfortunately, too many companies still focus exclusively on the area of prevention when it comes to securing their IT systems. This then makes forensic work more difficult if a system does get breached. Because...

Detection

It's no secret among hackers that there is no such thing as 100% security for IT systems. A simple bug can already provide an attack vector. And in most cases it takes at least a few hours until a suitable patch is available. In addition, there are exploits that are only passed on by hackers under the table, so that it sometimes takes days or even weeks for the vulnerability to become known. Remember the Exchange bug some months ago? It was known to intelligence agencies a long time before. So it's necessary that anomalies in an IT system are detected.

This is where detection comes into play. It includes firewall systems that validate traffic, as well as intrusion detection and prevention systems that detect anomalies in the systems themselves. Furthermore, modern intrusion detection systems are also capable of analyzing statistical data from the systems and using this to detect unusual processes. Good defense systems can also block typical attacks (like brute-force attempts, for example) and thus already prevent worse.
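
A small example of such automatic blocking on a Linux server is fail2ban, which bans IP addresses after repeated failed logins:

# shows the currently banned IPs and the number of failed attempts for the SSH jail
fail2ban-client status sshd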

Assessment

However, even the best intrusion detection systems and firewalls can falsely identify processes as attacks. They are already quite good at automatic assessment and are also getting better and better, thanks to artificial intelligence. Nevertheless, control by a human should also take place. The evaluation should therefore never be left to the automated systems. Automated assessment should only ever be a part of the assessment process, using alerting to draw attention to the fact that an unusual process has been detected in the system. A human review is always required.

Response

Depending on how the assessment then turns out, a reaction is, of course, necessary. If an attack is detected that is still in progress, appropriate defensive measures should be taken. If a system has already been successfully compromised, a forensic investigation must be carried out to find out how the attack took place and what data was manipulated or stolen. Furthermore, a report must be made to the Data Protection Official and, if necessary, to the responsible authorities. It is usually not a bad idea to inform the customers, because experience shows us that data breaches are always published somehow. It's better if the company keeps control over the publishing.

If you implement this 4-level model in all IT systems and in your risk management, and have appropriately qualified employees, you can at least assume that attacks will not go unnoticed and that in many cases they can be averted in good time. In today's world, this is vital for companies. After all, in addition to high fines and possible lawsuits, the damage to a company's image is often difficult or impossible to repair.

Why we don't use Getstream - or why more privacy means more problems

While developing our mobile app, we were also looking for a service provider to offer text chat to our users. The solution from Getstream.io looked very nice and a test implementation showed that it worked perfectly for our needs. As usual, I searched for a Data Processing Agreement (sometimes also called Data Processing Addendum, especially if it's based on the so-called Standard Clauses), because Art. 28 GDPR requires one if personal / sensitive data is processed by a third-party provider (also called processor) on behalf of the data controller.

I found good documentation of their security on their website. But I couldn't find a pre-signed data processing agreement or anything similar, as it is provided by most other companies and even by Google and Microsoft. So I opened a ticket and asked for a contract, because as a Germany-based company we have some special requirements for such agreements.

The GDPR states in Art. 28 par. 9 that the contract "shall be in writing, including in electronic form". Ok, it's on their homepage and it's in electronic form. But § 126a of the German Civil Code defines the requirements for the electronic form, because this is not defined by the EU but is the responsibility of the individual member states. And there it is clearly stated: "If the legally required written form is to be replaced by electronic form, the issuer of the declaration must add his name to it and provide the electronic document with a qualified electronic signature." A text on a website doesn't comply with this requirement.

The answer to a request to the Governmental Data Privacy Official of Bavaria also said that at least a fixed format and evidence that both sides (the processor and the controller) have agreed to it are required. So what companies in Germany basically need is at least a write-protected PDF file and an email in which the third-party provider states that this is their Data Processing Agreement.

Unfortunately, Getstream's support answered my request by saying that they don't provide a DPA unless we are customers of their Enterprise Plan. Okaaaay... we, a small startup from Germany, must have >100k users before they give us a DPA? Really? I think it's clear that we will not continue with their service. Transferring our users' data to a third party without a legal basis is not a risk we're willing to take, neither I nor our CEO. Is it really that hard to create a PDF from their website and send it to us via email?

It was the only third-party provider in my whole career that tied a DPA to a specific plan, user level or similar. Even small startups from the USA send us a DPA if we explain the legal requirements to them and point out the laws we have to comply with. Most of them even sign it. On the other hand, it also shows what problems the strong privacy laws create for companies in Europe.

My privacy toolkit

As someone who always has an eye on protecting private data, not only for our company's customers, partners and employees but also for myself, I've looked at quite a few tools over the years, and some of them have become part of my daily workflows. But before I talk about some of them, I want to make a few points clear.

First of all, I'm an Apple user. And I am by conviction. I'm not an Apple fanboy who always needs the newest iPhone, iPad, Watch and Mac and sleeps in front of an Apple Store to be the first one who can buy the newest models. But I have worked with macOS (and formerly OS X) for around 10 years now without ever regretting the switch from Linux. Of course my smartphone is an iPhone and my tablet is an iPad Pro. To be honest, I never had so few problems on a Linux machine or an Android device as I have on my Apple devices. Of course, there was never a problem on Linux that I couldn't solve by myself, but if I look back to the times when I used Linux as a desktop system, I see a lot of lost hours that I spent fixing errors and unwanted behavior. Since I've been using Macs I have never faced similar problems again. And that's why I'm a convinced Apple user. In general, macOS is a BSD-like system and I like how smoothly the different Apple devices work together. For this reason, the software I'll talk about here will be mostly for Apple devices. But some of it is also available on other systems and platforms.

Secondly, I would like to note that my software recommendations are purely subjective. I don't claim that this is actually the best software available for specific tasks. There may be better options, but the ones I'm going to write about here are simply the ones I particularly like. Therefore, comments like "But XYZ is much better than ABC, because ..." are completely useless. If you want to recommend some software to me, give me facts to compare, not opinions.

And last but not least, I'm fully aware that I'm using paid software where I could use OSS alternatives. But there are reasons why I prefer commercial software to open source software. In my experience, software that you have to pay for is mostly better than the available OSS alternatives, at least on macOS.

But now... let's begin...

Email Encryption

I couldn't live without email encryption. Unfortunately this is a task that most Linux machines handle better with free / open-source software than a Mac does out of the box. But there is a solution called GPG Suite. It's not free, but worth every cent. It adds GPG / PGP encryption to the Apple Mail application, and as soon as you have created (or imported) a key for your email address(es), you can use encryption and email signing with a simple click. Your keys and the public keys of your contacts can be easily managed in a keychain-like interface, the GPG Keychain. Importing keys for specific recipients is also easy with the keyserver search that is integrated into the GPG Keychain tool. In addition, I use an email provider who offers GPG / PGP even in their webmailer. If you ever thought that email encryption is complicated, try Apple Mail with GPG Suite. You only need to understand: the public key is used to encrypt a message, the private key is used to decrypt it. That's basically all you need to know. Of course, this also means that you never give your private key to another person, because only you should be able to decrypt a message that was encrypted by somebody with your public key.
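GPG Suite also installs the gpg command-line tools, so the same principle can be demonstrated on the shell. This is only a sketch with example addresses and file names:

# export your own public key so others can encrypt messages to you
gpg --export --armor jane@example.org > jane_pub.asc

# encrypt a file for a recipient whose public key you have imported
gpg --encrypt --armor --recipient bob@example.org message.txt

# the recipient decrypts it with his private key
gpg --decrypt message.txt.asc

The GUI does exactly the same thing behind the scenes, it just hides the key handling from you.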

Taking Notes

Before I found Standard Notes, I used Evernote. And whenever I wanted to secure some data / notes, for example serials, I encrypted them with the GPG CLI tools before I added them to a note. I simply couldn't trust the built-in function for encrypting notes and I wanted to make sure that Evernote wasn't able to read my private stuff. I'm all the happier to have found Standard Notes. With this tool my life became easier, even if I still miss some features and some of them may never be implemented. But in general I like the idea of open source, and for me it's also ok if an open-source project provides paid features. Good work should earn a good income, in my opinion. And therefore I pay to be able to use the available extensions. In addition, I also added a sponsorship for the project on GitHub. It's only $5 per month, but if more people did the same, such projects could develop faster and their developers would have an easier life. I also support other projects and some artists in a similar way.

2-Factor / Multi-Factor Authentication

What I expect from my colleagues at work, I also practice in my private life, at least when it comes to information security. This means that I use 2FA/MFA wherever possible. But I'm not a fan of purely software-based solutions like the Google Authenticator app, Authy or the built-in 2FA of 1Password and similar tools. I'm using a YubiKey from Yubico. Yes, I know the controversial discussions around Yubico. But in general I don't think that you can trust any piece of computer hardware on our planet. Especially U.S. intelligence agencies have manipulated hardware too often in the past (which was later leaked by whistleblowers), so we should always be skeptical when we buy new hardware, no matter what type of hardware it is.

In the end, my YubiKeys make my life easier and they at least prevent hackers from logging into my accounts even if they get my username and password. What I especially like is the availability of an authenticator app that reads and stores your account information on the YubiKey. That way I can use the same OTPs, no matter which device I'm currently working on. As soon as I insert my key or connect it via NFC to the app on my computer/smartphone/tablet, I see the same accounts in the authenticator app. This app is especially helpful because not all platforms with 2FA also support hardware tokens; often they only provide an authenticator app interface. In addition, I can use my YubiKeys to unlock my computers without entering my password. It needs a little tinkering to make it work on macOS, but the looks I get when I unlock my laptop with a key like other people unlock their cars are priceless.
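For the curious: the "little tinkering" on macOS roughly boils down to the pam-u2f module. This is just a sketch; the exact PAM file and module path depend on your macOS and pam-u2f versions, so please read the documentation before you lock yourself out:

# install the PAM module (via Homebrew) and register your key
brew install pam-u2f
mkdir -p ~/.config/Yubico
pamu2fcfg > ~/.config/Yubico/u2f_keys    # touch the key when it blinks

# then add a line like the following to the relevant PAM config, e.g. /etc/pam.d/screensaver:
# auth sufficient pam_u2f.so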

Collecting Information

Anyone who has ever had to do more complex research work knows how quickly you end up with large amounts of unstructured data that you can easily lose track of. And sometimes you have to do research that shouldn't be shared with other people, like the employees of your service provider. For that reason I'm using a tool called Yojimbo from Barebones (who also provide the popular BBEdit). It's not the newest piece of software on the market, but I like the idea of having a kind of drawer on the side of your desktop where you can simply drop data that you want to preserve. Such data can be a document, a URL, a serial number, a piece of text and so on. Later you can tag the data to make it easier to search, but the built-in search is also very good at finding information based on its content. Tags also allow you to create topic-specific lists of your collected data that can be easily accessed from the left side of the Yojimbo window or from the drawer. If you have any sensitive information, you can encrypt (and decrypt) it with a simple click. Yojimbo doesn't use any central server to sync data between devices. The data always stays on your computer or in your iCloud, and so you keep control over the data you organize in this tool.

Translations

If you do research work on the web, you'll often find information in languages you cannot speak or understand. Online translators like Google Translate are helpful to extract at least the essence of a text. A little insider tip is a translation tool from Germany, DeepL.com. New languages are still being added from time to time. But what is particularly striking is that the translations provided by DeepL are far better than the translations from other providers. What does this have to do with privacy? If you have a Pro account for DeepL, they don't store your inputs unless you allow it in your account settings. And because it's a Germany-based company, it's not so easy for them to store PII that users may enter. Our privacy laws are very restrictive and the penalties companies must pay if they process data for purposes to which the user hasn't consented can be very painful, even for bigger companies. So if you need to translate a private text that nobody should have except yourself, use a Pro account on DeepL. The translations may not be perfect (for example, the German word "Datenträger" is wrongly translated as "data carrier" instead of "storage device" or "medium / media"), but they are much better than the results from Google, for example. And Google will always store your input for further purposes that you'll never know about.
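DeepL also offers an API, so you can keep translations out of the browser entirely. A minimal sketch with a placeholder key (the exact endpoint and plan details depend on your DeepL subscription):

curl -s https://api.deepl.com/v2/translate \
  -H "Authorization: DeepL-Auth-Key YOUR-API-KEY" \
  -d "text=Der Datenträger wurde verschlüsselt." \
  -d "target_lang=EN"

The response is a small JSON document containing the detected source language and the translated text.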

Communication

A lot of digital communication is done via instant messengers today. Unfortunately this means that the providers of such messengers are usually able to read your conversations. There are only a few exceptions like Signal, but even they don't have the best privacy, because they can connect data about you with your user account. A new solution comes from Switzerland. It's called TeleGuard. Even if the functionality is still very basic, the company behind this messenger (Swisscows) follows a very strict privacy policy. They don't store any information about your chats on their servers, unless you allow them to store backups of your conversations, which you can use to restore them on new devices. But even those backups are encrypted and decrypted only on your devices. The corresponding key never leaves you. However, this also means that you can lose your data completely if you forget the password you used to encrypt the data or to access your account. In this case, even Swisscows will not be able to recover your data.

Btw, Swisscows also provides a search engine that focuses on privacy (and on child protection, which is why you cannot find stuff like porn with it). If you need a child-friendly search engine with a privacy focus, give it a try.

Data Storage

Data storage is, in my opinion, one of the most critical pieces of infrastructure that we use in our daily life today. Of course we can rely on solutions like Dropbox, Google Drive or iCloud, but in the end we don't know what the providers really do with the data we store on their platforms. An easy solution is to use your own servers. You can either host them at home (an old laptop with an additional external hard disk does fine for most private requirements) or you can rent servers in a data center that are fully controlled by you. But be careful: never try to operate a server if you don't know anything about system administration. If you rent a server, you're also responsible for what is done with that machine. And if a bot is injected into your server and you don't notice it, it can become very expensive if the bot causes any harm to other IT infrastructure. In such a case you should ask somebody with the necessary knowledge to set up and manage the server for you.

Another alternative comes from my ISP, the German Telekom. They provide a cloud storage called "MagentaCloud" with moderate pricing (500 GB for 5 € / month, for example). And because it's a Germany-based company with its data centers located only in Germany, they can provide very good privacy, because even German intelligence services cannot access the data without the consent of a German court. The options for accessing the storage - rsync, scp, SFTP, WebDAV, their apps and the web interface - are also good and sufficient for my purposes. An additional security layer can be added by using a tool like Boxcryptor to encrypt all files.
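For the rsync/SFTP part, a regular sync can be a one-liner, for example in a nightly cron job. Host and paths here are placeholders, not the real MagentaCloud endpoints:

# mirror the local documents folder to the remote storage over SSH
rsync -avz --delete ~/Documents/ user@storage.example.org:backups/documents/

If you combine this with client-side encryption (Boxcryptor or simply gpg), the provider only ever sees encrypted blobs.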

And all the other stuff

Sometimes I reach points in my work where the available software doesn't meet my requirements at all. In such cases I often write small scripts or tools in Python, Go or Perl that do exactly what I need. If you have the time, I can only recommend learning a programming language. It enables you to write your own software for specific requirements or special tasks. For example, I built a small tool in the past that helps me to analyze the compliance of the cloud environments used by our company. I simply add the required access keys and the software runs in the background until it creates a final report for me that I can use in different security management tools. But often it's just one-liners. For example, you can Base64-encode the content of a file with a simple: perl -MMIME::Base64=encode_base64 -e 'print encode_base64 join"",<>' < myfile.txt
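And the matching counterpart for decoding, if you ever need to turn such an encoded file back into its original content (again just a convenience sketch with example file names): perl -MMIME::Base64=decode_base64 -e 'print decode_base64 join"",<>' < encoded.txt > myfile.txt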

And before you ask... my preferred editors for programming are Sublime Text and Emacs with Spacemacs.

So that's basically my privacy toolkit for my private life. At work I have, of course, many more privacy-related tools, like software for risk management, for creating the data processing activities index, for managing security incidents and much more. Maybe I'll write something about my "CISO toolkit" sometime in the future. In the meantime... live long and prosper!

100 days Challenge

Writing every day is nearly impossible for me, at least at the moment. Too much work has to be done before the launch of our new app. On some days I work around 16-18 hours. Around 7 days are left until everything has to be compliant with European and German laws. Nevertheless, I'll still try to write a few short lines every day.

In around 4 hours I'll have my next meeting. So I should call it a day and try to get some sleep. Of course this is not a permanent condition but rather an exception. After the launch my life will become a little bit more relaxed again.

Preparing the launch of a new app from the (data) security perspective

Our company is currently preparing the launch of a new mobile app. This means stress throughout the whole company. Marketing has to prepare all the campaigns and their tracking, the associated website has to be designed and tested, the management is constantly in contact with various agencies and the funders, and you can surely imagine that the developers hardly have a quiet minute.

I'm in the exciting position of being involved everywhere. After all, there is hardly any department in which data protection is not somehow involved. Not only does the data of future users have to be stored and processed in accordance with the legal regulations on data protection, for which I check the server setups and make sure that things like encryption are implemented properly; things like the terms of service, the privacy statements on the website and in the app, contracts and NDAs with external service providers, or how our customer support should handle requests from users in the future also land on my desk. Even the marketing department has to reckon with me rapping them on the knuckles if they try to link personal data with their tracking data, which I also monitor. And yes, even the cleaning woman in our office is not allowed to access all areas, and I'm responsible for ensuring she can't: by defining rules for our employees on how to handle access requests from other departments or strangers, by ensuring appropriate locking systems are installed in the doors, and by making sure the HR department handles the key distribution correctly.

In short, I have to have my eyes and ears pretty much everywhere. Beside that, I'm creating the data processing activities index, documenting data flows, creating the data protection impact assessment for the app, and my usual activities like checking and updating our internal policies and concepts, the risk management for our company (not only for data security but also for business continuity and so on), auditing the IT systems we use, etc. also need to be done.

But that's exactly what I enjoy so much about my position. Whereas in my previous positions I used to focus almost exclusively on the IT department (apart from the startups I was involved with in the early stages, where you always have to help out in other areas anyway), in my role as CISO I gain insight into all areas of our company, starting with our office management and ending with the top management, of which I'm now also a part. I find it exciting to get this overall view of how a company like this works and how all the employees work together like the parts of a well-oiled clockwork. And at the end of the day, you look at the day's work in amazement and see how much has been accomplished in so few hours. If you ever want to run your own company, I can recommend working in the security department of an IT company for a while. Afterwards, you will have much more understanding for the worries and fears of the employees, from the cleaning lady up to the department heads and C-level managers.

FF-Sec says hello

Hello world!

With the output of this line, many people start programming nowadays. Looks like a good start for a blog to me, too. ;)

When I dove into the world of computers over 20 years ago, I could not have guessed the journey I would begin. It was not my first contact with a computer (a KC-85 from the GDR in our school was the first computer I used), but the real journey began when I first installed Linux on my 386 PC, because Windows (3.11) and gaming bored me. On it I learned my first programming language... C. Ok, not really my first one, because I had already written an application in a BASIC dialect on the KC-85, but that was mostly copying it from a piece of paper, and my understanding of it was like the language was named… very basic. ;)

Of course my first application in C was also a kind of "hello world", but I already added a condition that depended on a variable. For me it was breathtaking, because I experienced for the first time how it feels when my machine does what I want. Programming became the drug of my choice. Shortly after that I learned assembler, because I wanted to understand all parts of the Linux kernel. Along the way I learned Perl to automate things on my computer. And just 2 years later - 2 years in which I spent about 12-18 hours a day in front of my computer - I started to work as a freelance system administrator and set up Linux servers for different companies. Then at some point a company came along that really wanted to take me on as a permanent employee and made me a good offer. And from that point on, my career went from system administrator to system engineer to DevOps engineer to security engineer and to IT security manager, because data protection and IT security were always important topics for me. And now I'm the CISO of a company in Germany and take care of all the information security topics in our company.

In fact, I never did any training in computer science (instead I trained as a laboratory chemist). I taught myself everything, partly from books, but mostly by trying things out, reading the manpages built into Linux and from information I found on the web.

And here I am, in my mid-40s, still addicted to computers. Although I still deal with Linux servers, I now prefer to use macOS as my desktop system. After all, it's also only a Unix. And I often had to deal with Unix during my career; FreeBSD and Solaris were my daily companions at times. About a year ago, my father gave me a Windows PC, because he couldn't do anything with Windows 10 and wanted his XP back. He is nearly 80 years old now and never had internet access, so it's ok. And I can only say... WTF?! How did such a static, inflexible and untidy system ever make it on the market? As was the case with Windows 3.1(1), it is somehow only useful for gaming. And that's exactly why the PC is now sitting around in my apartment. You can certainly imagine how often it is turned on. It must be about 4-5 months since I last used it. :D

What can you expect in my blog? I have absolutely no idea. :D About 10 years ago, when I worked for the blog.de platform (at that time still quite a big blogging platform in Germany, later sold to an Italian company that fucked it up), I blogged regularly. After that I wrote irregularly on different platforms. And even today I still have 2 blogs where I post the pictures that I draw on the iPad and articles on topics that interest me. But I like the idea of writing a blog with a note-taking app. Somehow it feels like this is how blogging should work. No distraction by creating designs and similar stuff. Simply writing what comes to my mind. And that's what you can expect here… whatever comes to my mind. It may be about topics related to data protection and information security (yes, for me there is a difference, perhaps I'll explain it someday), thoughts about drawing with Procreate on the iPad, my adventures while geocaching (yes, sometimes even I leave my home to face the real world), about what I learn as a dad from my (autistic) children - and there's a lot we can learn from children - or any other topics. We'll see.

Let's see where this journey leads me. If my English isn't perfect, please be indulgent. Even after 20 years I'm not really good at it. But I try my best and I hope my posts are understandable. So… let's have some fun! See you soon!