Iliya Garakh, CTO

Latest — Feb 14, 2024

In Passwork 6.3, we have implemented numerous changes that significantly improve organization management efficiency, provide more flexible user permission settings, and increase security:

  • Administrative rights
  • Hidden vaults
  • Improved private vaults
  • Improved settings interface

Administrative rights

Available with the Advanced license

Now there is no need to make users administrators in order to grant them specific administrative rights. This option is a response to one of the most frequent requests from our customers.

Administrators can grant only those rights or permissions that are necessary for users to fulfill their duties and flexibly customize access to settings sections and manage Passwork. For instance, you can grant employees the right to create and edit new users, view the history of user activity, track settings changes, while restricting access to organization vaults and System settings.

You can configure additional rights on the Administrative rights tab in User management. There are four settings sections to flexibly customize Passwork for your business:


Organization

In this section, you can grant users access rights to manage all existing and new organization vaults, view the history of actions with settings and users, access license info and upload license keys, and view and modify the parameters of SSO settings and Background tasks.

User management

In this section, you can grant users access rights to view and modify User management parameters. This includes performing any necessary actions with users and roles, such as creating, deleting, and editing users, changing their authorization type and sending invitations.

System settings

In this section of settings, you can grant users the right to view and modify specific groups of System settings.

LDAP settings

In this section, you can grant users the right to view and modify LDAP parameters which include adding and deleting servers, registering new users, managing group lists, viewing and configuring synchronization settings.

Activity log

The event of changing user administrative rights has been added to the Activity log. Every change is now recorded, including the user who initiated it and each modified setting with its previous and current values.

Interface improvements

Users with additional administrative rights are marked with a special icon next to their user status.

Some items remain unavailable until the necessary settings have been activated. When hovering your cursor over such items, a tooltip with information regarding dependent settings will be displayed.

Hidden vaults

In previous versions of Passwork, only organization administrators could hide vaults, and only organization vaults could be hidden. Now, any user can hide any vault. Hiding makes a vault invisible only to the user who hides it and does not affect others.

Hidden vault management is now carried out in a new window, which is available directly from the list of vaults. You can view the list of all available vaults and customize their visibility there.

Private vault improvements

Displaying private vaults in User management

In addition to hiding private vaults, employees with User management access can now see all vaults they administer, including private vaults. User management has also gained the ability to add users to private vaults.

Logging of events in private vaults

Private vault administrators can view all events related to their vaults in the Activity log.

Other changes

  • Fixed an issue which prevented users from changing their temporary master password
  • Fixed an issue which prevented users from setting the minimum length for authorization and master passwords
  • Fixed an issue in User management which made administrator self-deletion possible
  • Minor improvements to the settings interface

Introducing Passwork 6.3

Feb 11, 2024 — 4 min read

Self-signed certificates are widely used in testing environments, and they are an excellent alternative to purchasing and renewing certificates every year.

That is, of course, if you know how and, more importantly, when to use them. Remember that a self-signed certificate is not signed by a publicly trusted Certificate Authority (CA). Instead, it is created, issued, and signed by the company or developer responsible for the website or software associated with it.

You are probably reading this article because for some reason, you need to create a self-signed certificate with Windows. So, we’ve tried to outline the easiest ways to do that. This article is up-to-date as of December 2021. By the way, we’re referring to Windows 10 for all the following tutorials. As far as we know, the processes for Windows 11 are identical.

So what are our options?

Using Let’s Encrypt.

These guys offer free CA certificates with various SAN and wildcard support. The certificate is only good for 90 days, but they do give an automated renewal method. This is a great alternative for a quick proof-of-concept. Other options would require more typing, for sure.

But this option works only if you want to generate a certificate for your website. The best way to start is by going to Getting Started, the instructions thereafter are very easy to follow.

Other one-click option:

We’ve reviewed different online services that allow you to easily generate self-signed certificates. We’ve sorted them from one-click to advanced, and the first one is:

Just enter your domain name — and you are ready to go:

Fill out the following fields:

Press “Next”, then confirm your details, and get your certificate:

It’s that easy!


Among the online services that allow you to generate self-signed certificates, this one is the most advanced; just look at all available options to choose from:

Now let’s continue with offline solutions, which are a bit more advanced:

PowerShell 4.0

1. Press the Windows key, type PowerShell. Right-click on PowerShell and select Run as Administrator.

2. Run the New-SelfSignedCertificate cmdlet, as shown below.

$cert = New-SelfSignedCertificate -CertStoreLocation cert:\LocalMachine\my -DnsName example.com

3. This will add the certificate to the local machine certificate store on your PC. Replace the domain name in the above command with your own.

4. Next, create a password for your export file:

$pwd = ConvertTo-SecureString -String 'password!' -Force -AsPlainText

5. Replace password! with a password of your own.

6. Enter the following command to export the self-signed certificate:

$path = 'cert:\LocalMachine\my\' + $cert.thumbprint
Export-PfxCertificate -cert $path -FilePath c:\temp\cert.pfx -Password $pwd

7. In the above command, replace c:\temp with the directory where you want to export the file.

8. Import the exported file and deploy it for your project.

Use OpenSSL

1. Download the latest OpenSSL Windows installer from a third-party source;

2. Run the installer. OpenSSL requires Microsoft Visual C++ to run. The installer will prompt you to install Visual C++ if it is not already installed;

3. Click Yes to install;

4. Run the OpenSSL installer again and select the installation directory;

5. Click Next;

6. Open Command Prompt and type OpenSSL to get an OpenSSL prompt.

The next step would be to generate a public/private key file pair.

1. Open Command Prompt and create a new directory on your C drive:

C:\>md Test

2. Now go to the new directory:

C:\>cd Test

3. Now you need to type the path of the OpenSSL install directory followed by the RSA key algorithm:

C:\Test>c:\openssl\bin\openssl genrsa -out privkey.pem 4096

4. Run the following command to extract the public key from the generated private key file:

C:\Test>c:\openssl\bin\openssl rsa -in privkey.pem -pubout -out pubkey.pem

Once you have the public/private key generated, follow the next set of steps to create a self-signed certificate file on Windows.

1. Go to the directory that you created earlier for the public/private key file:

C:\Test>

2. Enter the path of the OpenSSL install directory, followed by the self-signed certificate algorithm:

C:\Test>c:\openssl\bin\openssl req -new -x509 -key privkey.pem -out cacert.pem -days 109

3. Follow the on-screen instructions;

4. You need to enter information about your organization, region, and contact details to create a self-signed certificate.

We also have a detailed article on OpenSSL – it contains more in-depth instructions on generating self-signed certificates.

Using IIS

This is one of those hidden features that very few people know about.

1. From the top-level in IIS Manager, select “Server Certificates”;

2. Then click the “Create” button on the right;

3. This will create a self-signed certificate, valid for a year with a private key. It will only work for “localhost”.

We hope this fruit bowl of options provides you with some choice in the matter. Creating your own self-signed certificate nowadays is trivial, but only until you begin to understand how they really work.

Our option of choice is, of course, OpenSSL — after all, it is the industry standard.

7 ways to create self-signed certificates on Windows

Feb 10, 2024 — 4 min read

Are you having trouble remembering your passwords or accessing your account? Perhaps you’re stressing out that you may have been hacked? Well, in any case, this article will walk you through restoring your Facebook account using reliable account recovery methods, so buckle up!

In order to regain access to your Facebook account, you can use one of several automated methods. Many are based on the information you provided when you set up your account, which isn’t helpful if you can’t remember the most important piece of information you provided when you set up the account — your password. Also, some information will be out of date, like your recovery phone number or your active email address.

And even if all methods listed below fail, we’ve got an alternative for you right at the very bottom of the article.

Firstly, make sure that you aren't still logged into Facebook somewhere else!

The Android and iOS Facebook apps, as well as mobile browsers, may all be used to access the site, so you might still be logged in on one of them.

If you are logged in, you can ‘recover’ your account by simply changing the password, and it can be done without a confirmation reset code!

But if you are not logged into Facebook on other devices or browsers — try Facebook's Default Account Recovery Methods.

If at all feasible, log into your Facebook account using the same internet connection and computer or phone that you've used on a regular basis in the past. If Facebook detects your network and device, you may be able to reset your password without having to provide any extra information to Facebook. But first and foremost, you must authenticate your account.

Find and recover your account by providing contact information

The best option is to directly go to the Facebook Recovery Page.

To sign in, enter an email address or phone number that you previously associated with your Facebook profile. When trying a phone number, test it both with and without your country code, for example, 1, +1, or 001 for the United States; all three variants should work just fine. Even if it doesn't explicitly say so, you may also use your Facebook username instead of your mobile number or email.

Your profile will be summarised once you have successfully identified your account, as seen in the screenshot below. Please double-check that this is indeed your account and that you still have access to the email address or phone number mentioned before proceeding. The option of choosing between email or phone recovery may still be available to you.

If everything appears to be in order with the contact information that Facebook has on file for you, though, click Continue. A security code will be sent to you by Facebook.

Retrieve the code from your email or phone (depending on whatever method you used), input it, and rejoice in the knowledge that you have regained access to your Facebook profile.

At this point, you have the option of creating a new password, which we highly advise you to do.

If you don't receive the code via email, check your spam folder; if it doesn't arrive by text, make sure your phone can receive messages from unknown senders.

If you are still unable to receive the code, choose Didn't get a code? from the drop-down menu. You can return to the previous screen by clicking the X in the bottom-left corner of the Enter Security Code box.

Maybe you'll get lucky and discover that you don't, in fact, have access to the account at all!

Log back into your Facebook account

You should immediately reset your password and update your contact information if you have regained access to your Facebook account after a suspected hijacking.

To keep your Facebook account safe, follow two simple rules. First, get rid of any email addresses or phone numbers that you no longer have access to. Second, enable two-factor authentication on all of your social media accounts to prevent a loss of access in the future.

Don’t forget, the Facebook Help Community is a great place to find answers to your issues.

If all else fails, creating a new Facebook profile might not be as bad as you think

Over the past few years, we've received a large number of letters from users who were unable to regain access to their Facebook accounts, despite following each and every one of the instructions listed above.

Typically, their contact information was out of date, the recovery codes offered by Facebook were ineffective, or the corporation never responded to their request for identification verification. And at that point, you’re pretty much out of options.

You have to accept the fact that you must move on. Even though it's painful, you must learn from your mistakes and register a new user account.

Always include legitimate contact details, don’t forget to up the security on your Facebook account, and completely re-create your profile from the ground up. Despite the inconvenience, it’s a better option than doing nothing. Not to mention, you won’t have any of those embarrassing old photos, and you can only add people as friends that really matter to you now.

How to recover your Facebook account

Jan 19, 2024 — 3 min read

In Passwork 6.2 we have introduced a range of features aimed at enhancing your security and convenience:

  • Bin
  • Protection against accidental vault removal
  • Protection against 2FA brute force
  • Accelerated synchronization with LDAP
  • Improved API settings
  • Bug fixes in role management


Bin

Now, when you delete folders and passwords, they are moved to the Bin. If needed, they can be restored with their previously set access permissions preserved. Vaults are deleted without being moved to the Bin — they can only be restored from a backup.

Who can view deleted passwords and folders in the Bin?

Inside the Bin, users can see deleted items from the vaults in which they are administrators. For instance, an employee who is not an administrator of any organization vault will only see deleted passwords and folders from their personal vaults when opening the Bin.

In addition to object names, the Bin also displays the usernames of people who deleted data. You can also see the initial directory name and the deletion date.

Object restoration

Objects from the Bin can be restored to their initial directory if it has not been deleted or moved. Alternatively, you can choose any other directory where you have edit and higher access levels.

When restoring deleted folders to their initial directories, user and role access levels will also be restored exactly as they were previously manually set in these folders. Other access permissions will be set based on the current permissions in the initial directory.

When restoring folders to a directory different from the initial, access levels will always depend on the current permissions in the selected directory.

Additional access to deleted passwords

If passwords have been shared with users, moving them to the Bin will remove them from the “Inbox” section, and any shortcuts or links to these passwords will become nonfunctional.

Restoring additional access

When restoring from the Bin, it is possible to regain additional access levels to passwords. Passwords that were shared with users will reappear in their “Inbox” section, access to passwords through shortcuts will be restored, and links that have not expired will become functional again.

Bin cleanup

You can delete selected items from the Bin or use the "Empty Bin" button to remove all items contained inside.

It's important to note that in the Bin you only see the items which were deleted from the vaults where you are an administrator. Objects from other vaults are not visible, and clearing the Bin will not affect them.

In the future, the option to configure automatic Bin cleanup will be added.

Protection against accidental vault removal

To confirm the deletion of a vault, you now need to enter its name. It will be permanently deleted along with all the data inside. Additionally, if there are passwords or folders from this vault in the Bin, they will also be removed.

Protection against 2FA brute force

Protection against 2FA brute-force attacks has been added. After several incorrect attempts to enter the 2FA code, the user will be temporarily locked. The number of attempts, input intervals, and the lockout time are set in the config.ini file.
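The mechanism is easy to picture. Below is a generic counter-and-cooldown sketch of a 2FA lockout; it is an illustration only, not Passwork's implementation, and the class and parameter names are invented for the example (the real limits live in config.ini):

```python
import time

class LockoutGuard:
    """Toy 2FA lockout counter: after max_attempts consecutive failures,
    further attempts are rejected for lockout_secs seconds."""

    def __init__(self, max_attempts=5, lockout_secs=300):
        self.max_attempts = max_attempts
        self.lockout_secs = lockout_secs
        self.failures = 0
        self.locked_until = 0.0

    def is_locked(self, now=None):
        return (now if now is not None else time.time()) < self.locked_until

    def record_failure(self, now=None):
        now = now if now is not None else time.time()
        if self.is_locked(now):
            return
        self.failures += 1
        if self.failures >= self.max_attempts:
            self.locked_until = now + self.lockout_secs
            self.failures = 0  # reset the counter once the lock engages

guard = LockoutGuard(max_attempts=3, lockout_secs=60)
for _ in range(3):
    guard.record_failure(now=100.0)
print(guard.is_locked(now=130.0))  # → True (locked until t=160)
print(guard.is_locked(now=161.0))  # → False (lock expired)
```

Tuning the attempt limit and cooldown trades user convenience against the time an attacker would need to brute-force a short numeric code.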

Other changes

  • LDAP synchronization has been accelerated
  • Descriptions of parameters and minimum allowable values for API token expiration time and API refresh token expiration time have been added to the API settings section
  • Automatic assignment of "Navigation" to parent folders in role management has been fixed
  • The issue when a vault administrator could not add roles to a vault and manage its permissions has been fixed
  • The issue with showing additional access rights to passwords when moved to another vault has been fixed

Upgrade Instructions — How to update Passwork
More information about features and prices — on the Passwork website

Introducing Passwork 6.2

Dec 18, 2023 — 3 min read

Over the past decade, data has transitioned from mere information to a precious asset. Numerous enterprises thrive on data, while others crumble with its loss. Customer personal information, analytics, financial transaction records and more hold monetary value. Yes, there's an abundance of informational "clutter" around, but even amid hard-to-spot data, a skilled cybercriminal can discover a gold mine. 

The acceleration of information technology is rapid, with fresh information emerging and being processed every moment. Often, companies simply lack the time to sift the "wheat" from the "chaff" and, as a result, release sensitive data, like customers' home addresses for delivery, into the open. 

Most firms have mastered data collection, some have ventured into processing it, and fewer still into analyzing it, but not all have grasped how to safeguard it. In this article, we’ll explore what qualifies as sensitive data, how to shield it, and the primary blunders made while handling it.

What sets apart ordinary data from sensitive data? 

With the trend of data accumulation in the market, corporations have embraced it wholeheartedly. This opens up numerous avenues for growth, business broadening and optimization, and introducing new offerings to the market. For instance, by scrutinizing customer conduct, you can present them with the products they need at the opportune moment. Or, simply, knowing customers' birthdays, send a discount coupon as a present, encouraging a new purchase. The possibilities are myriad, and they stem from entirely diverse data types. That's why enterprises amass data even before understanding its use. It's for the just-in-case scenario. 

Similarly, it's not always feasible to instantly determine the significance of data and the extent of protection required. Some opt for overcaution, storing data securely from the outset, while others leave it in public view, thus risking it. The sensitivity of data can be gauged by asking — what’s the fallout if it’s pilfered? 

Two outcomes exist. Either nothing happens, in which case the data isn't sensitive, or the offender can, directly or indirectly, inflict harm on the business or its customers. For instance, by pilfering personal data, like full names and phone numbers, and releasing them online, a criminal dents the company’s reputation. Or, by stealing an individual’s data (their address, purchasing tendencies and, say, date of birth), they can orchestrate a social engineering assault.

Sensitive data encompasses information that could potentially jeopardize its possessor. For regular folks, it’s mainly personal and financial data, medical details, relationship data, personal visuals, and data on preferences. For companies, it includes internal business records, customer and employee databases, confidential documents, market evaluations, and the like. 

Recognizing sensitive data 

The theft or exposure of sensitive data undermines a company's customer privacy, triggers financial setbacks, and could even threaten an organization’s security. Hence, distinguishing sensitive personal data from common data is crucial. This involves carrying out a data classification and risk assessment. 

This could encompass evaluating potential damage in case of a data breach, as well as examining legal mandates for specific data types. Primarily, anything related to sensitive information and personal data should be guarded. However, the task of identifying data types doesn’t conclude there: trade secrets, for instance, can be shielded by internal regulations or at your discretion, but personal data must be classified and shielded by law.

Information security experts opine that, to pinpoint sensitive company data, the information security division, along with representatives from various sectors (accounting, legal, HR, and marketing), should formulate guidelines to identify sensitive information. The primary focus here is the potential financial or reputational harm from information leakage. Yet this indicator may not always be objective: numerous cyber incidents involving social engineering demonstrate that even seemingly harmless data about a person can be utilized to perpetrate a crime.

Key blunders in handling sensitive data 

Both enterprises and users can be culpable for sensitive data leakage. On the corporate side, the usual culprit is a basic disregard for information security norms. For instance, unprotected corporate networks, operating on outdated operating systems, or absence of antivirus protection. On the user side — unawareness of cyber hygiene norms and a lack of understanding of what data might be sensitive. Common errors enabling sensitive data leakage: 

• Inadequate password and account safeguards
• Lack of data categorization within the firm
• Improperly set up security systems
• Absence of data encryption
• Employees untrained in cyber hygiene

Moreover, information is often undervalued by both corporations and individuals. For instance, a person may deem their passport information crucial but be indifferent about sharing their health information on social networks. Like any other domain of information security, elementary measures are paramount. For example, remembering updates, prompt training of staff in cyber hygiene, and employing protective software.


The subject of sensitive data is steadily gaining traction, as only in recent times have assailants learnt to actively exploit personal or corporate data to commit offenses. Larger and more technologically advanced companies address the issue at a more sophisticated level, as they have learnt not only how to analyze and segment data but also how to defend it. However, there's another facet to consider: the users of a company's services themselves, who may possess minimal awareness of the worth of their personal data and can trigger leaks.

Sensitive information: distinguishing the crucial from the commonplace

Dec 12, 2023 — 4 min read

Prominent enterprises have endured substantial setbacks due to security breaches within their mobile applications, underscoring the criticality of app security that is often overshadowed by server-side concerns. Contrary to popular belief, mobile apps are not mere interfaces for server data; their vulnerabilities can inflict extensive damage, not limited to a single user but potentially devastating to the business at large. This article aims to elucidate this often-overlooked risk by showcasing notable instances where mobile app vulnerabilities have led to significant financial and reputational harm.

TikTok's multi-faceted security dilemmas 

The year 2020 was marked by significant scrutiny directed at TikTok, a widely used social platform. The app was caught accessing clipboard data on Apple devices without user authorization, a clear invasion of privacy that could potentially lead to the exposure of sensitive personal and professional information. The same period saw the emergence of other security loopholes that provided attackers with the capability to compromise accounts, exfiltrate personal data, or circulate harmful content. The situation was further aggravated by concerns over TikTok's alleged ties to foreign government entities. The controversy was so severe that it led to the app's prohibition in several regions and culminated in a class-action lawsuit that cost the company $92 million in settlements. This series of events underscored the imperative for app developers to meticulously govern data collection practices to safeguard user privacy.

Strava's global heatmap incident 

The fitness-oriented app Strava faced its own share of controversy in 2018 when it released a global heatmap of user fitness activities. What might have been a novel idea turned sour when it inadvertently compromised the safety of military personnel by revealing their movements and even the locations of military facilities. Although Strava claimed that the map was anonymous, resourceful individuals managed to de-anonymize the data, proving that even data represented as anonymous can be reconstructed to reveal identities. This incident sparked a global debate on the security ramifications of sharing fitness tracking data through apps and the potential threats it could pose to individuals and national security.

Starbucks' mobile app compromise

In 2015, Starbucks, the global coffeehouse chain, confronted a serious breach when its mobile app fell victim to an attack. Due to inadequate authentication processes, cybercriminals managed to hijack customer accounts. This security oversight led to unauthorized access to payment details and illegal transactions, leaving customers financially vulnerable and causing a major dent in Starbucks’ corporate image.

WhatsApp and a spate of security breaches

WhatsApp, one of the most popular messaging apps worldwide, wasn't immune to security flaws. In 2019, a vulnerability was exploited to install Pegasus spyware on users' devices, leading to a significant breach of confidential information, including personal messages and call logs. Another flaw, known as "Media File Jacking," was identified, affecting both Android and iOS users. This particular vulnerability allowed cybercriminals to alter media files, replacing them with inappropriate or harmful content. A notably critical issue emerged in 2021, involving WhatsApp's group chat feature, which inadvertently exposed users to phishing and other social engineering attacks due to flawed invitation controls. These incidents collectively contributed to a substantial erosion of trust among WhatsApp users.

Clubhouse's privacy controversy 

Clubhouse, the audio-based social network that gained rapid popularity, faced serious backlash when a significant vulnerability was discovered. The flaw allowed malicious actors to secretly record and broadcast live audio conversations, a blatant violation of user privacy. Furthermore, the transmission of user IDs in plain text made it possible to de-anonymize conversations, adding fuel to the growing privacy concerns. The repercussions included a severe reputational hit and heightened skepticism about the security protocols of emerging social media apps.

Signal's unexpected security flaw 

Signal, an app that prides itself on security, encountered a surprising setback when a vulnerability was discovered, allowing for PIN brute-force attacks. This revelation was particularly alarming given the app's reputation for robust security, and it inevitably affected its perceived reliability.

Zoom's security and privacy scandals 

Zoom, a leader in video conferencing, faced multiple issues in 2020. A vulnerability was exploited by uninvited individuals to intrude on private meetings, leading to the infamous "Zoom-bombing" incidents. Furthermore, misleading claims about the app's encryption standards led to public uproar when it was revealed that Zoom had the technical capability to access private conversations. This forced the company to revamp its encryption system on a tight schedule, incurring considerable costs.

Snapchat's ongoing security struggles 

Snapchat, popular among younger demographics, has had its fair share of security woes. Various vulnerabilities allowed for account breaches and even real-time location tracking, posing a severe threat to user safety and privacy. These issues resulted in negative publicity and declining user engagement.

Uber and Airbnb's security breaches 

Both Uber and Airbnb have experienced security breaches that enabled attackers to take over user accounts. These incidents, involving unauthorized rides and bookings, underlined the critical importance of robust authentication mechanisms and the potential financial and reputational damages stemming from such breaches.

Fortnite’s gaming data breach 

Fortnite, a gaming sensation, hasn’t been spared from security flaws. Vulnerabilities discovered allowed attackers to hijack accounts, make unauthorized in-game purchases, and access sensitive personal data. These incidents brought to light the risks associated with online gaming platforms and the need for enhanced security measures, particularly given the young age demographic of many users.


In summary, it's evident that mobile app vulnerabilities are a widespread issue, often underreported or overlooked by the general populace. Users must recognize the gravity of the personal and sensitive information stored within their devices and the apps they use. It's prudent to avoid reusing passwords, to be wary of suspicious apps, and to exercise caution when sharing information online. In an era where digital threats are increasingly sophisticated, vigilance is our first line of defense.

Unveiling the giants: corporations whose flawed apps inflicted business catastrophes

Dec 11, 2023 — 3 min read

In 2024, the digital finance landscape is increasingly challenged by sophisticated forms of fraud, particularly carding. This type of credit card fraud, involving the unauthorized use of stolen card information, poses significant risks to both individuals and financial institutions. This comprehensive exploration delves into the mechanisms of carding, its evolutionary trajectory in the realm of financial fraud, and the multi-faceted strategies being employed to protect bank accounts in this digitally-dominated era.

Understanding carding

Carding is a complex process initiated by the illicit acquisition of credit card information. This can occur through various methods: 

• Sophisticated hacking operations that breach financial databases
• Phishing schemes designed to deceive individuals into divulging their details
• Large-scale data breaches at major retailers or financial institutions

Once fraudsters acquire this data, they test it to verify its legitimacy and then use or sell it for unauthorized transactions, often leveraging the anonymity of the dark web. The speed and stealth with which carding operations are conducted make them a particularly pernicious and challenging form of financial fraud to counteract.
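One reason stolen numbers can be filtered so quickly is that every card number carries a built-in Luhn checksum, which weeds out mistyped or fabricated numbers before any contact with the payment network. A minimal validity check in Python (the number below is a well-known public Visa test number, not a real card):

```python
def luhn_valid(card_number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in card_number if d.isdigit()]
    if len(digits) < 12:          # typical card numbers are 12-19 digits
        return False
    total = 0
    # Double every second digit from the right; subtract 9 if the result exceeds 9
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))  # True  (public test number)
print(luhn_valid("4111 1111 1111 1112"))  # False (checksum broken)
```

Passing this check proves nothing about whether a card is live, which is why fraudsters follow it with small "test" transactions against real payment systems.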

The evolution of financial fraud

Financial fraud has undergone a significant transformation over the years. Initially, fraudsters employed physical methods like skimming devices on ATMs. However, the digital revolution brought about more complex and less detectable methods, including malware that captures sensitive information and sophisticated phishing operations. These digital methods necessitate equally advanced countermeasures in security and consumer awareness.

Regulatory bodies have escalated their efforts in enforcing data security standards. Financial institutions are now mandated to comply with rigorous data protection regulations, including conducting regular security audits and adhering to cybersecurity best practices. These regulations are crucial in ensuring a baseline of security across the financial sector.

Protecting bank accounts in 2024

Enhanced authentication

In response to these threats, banks have significantly enhanced their security measures. The integration of biometric verification methods, such as fingerprint and facial recognition technologies, has introduced a personalized layer of security challenging for fraudsters to replicate. Additionally, two-factor authentication (2FA), combining knowledge-based (passwords) and possession-based (a mobile device for OTPs) elements, has become a standard security practice, drastically reducing unauthorized account access.

Advanced encryption

Encryption is a cornerstone in securing data transmission. Modern banking involves sophisticated encryption protocols that cloak data during transmission, making it virtually impenetrable to interception and misuse. This ensures that even if data is captured by unauthorized entities, it remains secure and indecipherable.
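In practice this "cloaking" in transit is TLS. As one concrete illustration, Python's standard library defaults already enforce certificate validation and hostname checking for client connections, and a floor on the protocol version can be set explicitly (a configuration sketch, not a complete client):

```python
import ssl

# Client-side TLS context: certificate validation and hostname checks are
# enabled by default; additionally refuse anything older than TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

A context like this would then be used to wrap the socket for every connection to the bank's servers, so intercepted traffic yields only ciphertext.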

AI and machine learning

The adoption of artificial intelligence and machine learning has been a game-changer in detecting and preventing fraud. These technologies analyze extensive transaction data, identifying anomalous patterns indicative of fraudulent activity. By quickly flagging these irregularities, banks can proactively address potential fraud, often before customers are aware of any risk.
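Production systems rely on learned models over many features, but the underlying idea of anomaly flagging can be illustrated with a toy z-score check on transaction amounts (all numbers below are invented for the example):

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return transactions whose amount deviates from the customer's
    historical mean by more than `threshold` sample standard deviations."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if sigma and abs(a - mu) / sigma > threshold]

history = [42.0, 15.5, 60.0, 33.0, 58.0, 21.0, 47.0, 3200.0]
print(flag_anomalies(history))  # → [3200.0]: the outlier transfer is flagged
```

A real fraud engine would score each transaction against far richer context (merchant, geography, device, time of day) and learned rather than fixed thresholds, but the principle is the same: flag what deviates sharply from the customer's established pattern.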

Secure banking applications

The development of secure banking applications has been a focus for financial institutions. These applications come equipped with features like automatic logout after periods of inactivity, fraud alert systems, and encrypted communication channels for reporting suspicious activities. Such features empower customers to safely manage their accounts and contribute to the overall security framework.
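The automatic-logout feature mentioned above boils down to tracking time since the last user action. A toy sketch of the idea (the class and timeout value are illustrative assumptions, not any bank's actual implementation):

```python
import time

class Session:
    """Toy session that expires after `timeout` seconds of inactivity."""
    def __init__(self, timeout=300.0):
        self.timeout = timeout
        self.last_activity = time.monotonic()

    def touch(self):
        """Record user activity, resetting the inactivity clock."""
        self.last_activity = time.monotonic()

    def is_active(self):
        return time.monotonic() - self.last_activity < self.timeout

s = Session(timeout=0.05)   # unrealistically short, just for the demo
print(s.is_active())        # True: the user just authenticated
time.sleep(0.1)             # user goes idle past the timeout
print(s.is_active())        # False: the app would now force re-login
```

Every screen tap or request would call `touch()`; once `is_active()` turns false, the app discards the session token and returns to the login screen.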

Consumer education

Consumers are essential in safeguarding their financial information. Vigilant monitoring of account activities, cautious sharing of personal information, and using secure networks for online banking are fundamental preventive measures. Prompt reporting of any anomalies or suspicious activities to their banks is also vital in preventing the escalation of potential fraud.

Educating consumers is pivotal in the fight against financial fraud. Banks are actively investing in campaigns to raise awareness of safe online practices, such as recognizing phishing attempts, using secure networks for financial transactions, and promptly reporting unusual account activities.



Given the dynamic nature of financial security, continuous collaboration across sectors is imperative. Financial institutions, technology companies, and law enforcement agencies must maintain open channels of communication and strategy sharing. Ongoing innovation in security technologies and consistent consumer education are critical in staying ahead of evolving threats.

As we proceed through 2024, the safeguarding of bank accounts from threats like carding requires an integrated approach. This strategy involves leveraging cutting-edge technology, enforcing strict regulatory measures, cultivating informed consumer habits, and maintaining constant vigilance. By comprehending the complexities of financial fraud and adopting comprehensive, proactive security measures, we can aim for a more secure financial environment for all participants.

Navigating financial security: carding and bank account protection in 2024

Nov 10, 2023 — 4 min read

In the current digital landscape, where we frequently engage in conversations without visual context, our reliance on audio cues to verify the identity of our conversational partners has intensified. Our brains have developed an astonishing ability to discern and recognize the intricate details in someone’s voice, akin to an auditory signature that is unique to each individual. These vocal signatures, composed of elements such as pitch, pace, timbre, and tone, are so distinctive that we can often identify a familiar voice with just a few spoken words. This remarkable auditory acuity serves us well, but it is under threat by the advent of advanced technologies capable of simulating human voices with high accuracy—voice deep fakes.

What are deep fakes? 

The term 'deepfake' has quickly become synonymous with the darker potential of AI. It signifies a new era where artificial intelligence can manipulate reality with precision. Early deepfakes had their tells, but as the technology has progressed, the fakes have become almost indistinguishable from the real thing. 

The entertainment industry's experimentation with deep fakes, such as the lifelike replicas of celebrities in a TV show, serves as a double-edged sword. It showcases the potential for creative innovation but also hints at the perils of AI in the wrong hands, where the distinction between truth and fiction becomes perilously thin.

The creation of voice deep fakes is rooted in complex AI systems, particularly autoencoders, which can capture and replicate the subtleties of human speech. These systems don't just clone voices; they analyze and reproduce the emotional inflections and specific intonations that make each voice unique.

The implications are vast and varied, from actors giving performances in multiple languages without losing their signature vocal emotion, to hyper-personalized virtual assistants. Yet, the same technology also opens avenues for convincing frauds, making it harder to trust the unseen speaker.

The dangers of convincing voice deep fakes

Crafting a voice deepfake is a sophisticated endeavor. It involves a series of complex steps, starting with the collection of voice data to feed into AI models. Open-source platforms have democratized access to this technology, but creating a voice deep fake that can pass for the real thing involves not just the right software but also an expert understanding of sound engineering, language nuances, and the intricate details that make each voice distinctive. This process is not for the faint-hearted; it is a meticulous blend of science and art.

The misuse of deepfake technology has already reared its head in various scams, evidencing its potential for harm. Fraudsters have leveraged these fake voices to imitate CEOs for corporate espionage, mimic government officials to spread disinformation, and even duplicate voices of family members in distress as part of elaborate phishing scams. These incidents are not simply one-off events but indicative of a troubling trend that capitalizes on the inherent trust we place in familiar voices, turning it against us.

The path that deepfake technology is on raises profound questions about the future of trust and authenticity. Currently, the most advanced tools for creating deep fakes are closely held by technology companies and are used under strict conditions. But as the technology becomes more accessible, the ability to create deep fakes could fall into the hands of the masses, leading to widespread implications. This potential democratization of deepfake tools could be a boon for creativity and individual expression but also poses a significant threat in terms of misinformation, privacy, and security.

The defense against deep fakes: a multifaceted approach

To tackle the challenge of deep fakes, a robust and varied approach is essential. Researchers are developing sophisticated detection algorithms that can spot signs of audio manipulation that are imperceptible to the human ear. Legal experts are exploring regulatory measures to prevent misuse. And educational initiatives are aiming to make the general public more aware of deep fakes, teaching them to critically evaluate the media they consume. The effectiveness of these measures will depend on their adaptability and continued evolution alongside deepfake technology.

Awareness is a powerful tool against deception. By educating the public on the existence and methods behind deep fakes, individuals can be more vigilant and less susceptible to manipulation. Understanding how deep fakes are made, recognizing their potential use in media, and knowing the signs to look out for can all contribute to a society that is better equipped to challenge the authenticity of suspicious content. This education is vital in an era where audio and visual content can no longer be taken at face value.

Navigating the ethical landscape of deepfake technology is critical. The potential benefits for creative industries, accessibility, and personalized media are immense. Yet, without a strong ethical framework, the negative implications could be far-reaching. Establishing guidelines and best practices for the responsible use of deepfakes is imperative to prevent harm and to ensure that innovation does not come at the cost of truth and trust.


As voice deep fakes become more advanced, they pose a significant challenge to the trust we place in our auditory perceptions. Ensuring the integrity of our digital communications requires not just caution but a comprehensive strategy to navigate this new terrain. We must foster a society that is equipped to recognize and combat these audio illusions—a society that is as critical and discerning of what it hears as it is of what it sees. It is a complex task, but one that is essential to preserving the fabric of trust that binds our digital and real-world interactions together.

The trustworthiness of sound in the age of voice deepfakes

Sep 21, 2023 — 4 min read

Information security (IS) training is needed not just by IS department employees, or even by a select group of staff, but by everyone in a company. In today's world, where virtually every area of life has been digitized, information security training should be on par with fire safety and the other fundamental rules that employees are required to observe in the workplace.

Even the most ordinary employee today has access to corporate email or other means of communication within the company, as well as internal information systems and archives. If they do not know the basic rules of cyber hygiene or do not update them in a timely manner, they can become a springboard for attackers to access sensitive company data.

In this article, we will discuss the importance of cyber hygiene in the enterprise, why this knowledge needs to be updated, the pros and cons of in-house and third-party cyber security courses, and what should be included in a cyber hygiene course.

Why cyber hygiene training matters

Almost every aspect of modern business operations has been transformed by the digital revolution. From communication to data storage, the modes have shifted from tangible to virtual platforms. This transition has been a boon in many ways. However, the virtual world, much like its physical counterpart, is not devoid of threats.

Previously, threats were physically visible, like a fire, necessitating measures like fire drills. Today, threats are more intangible, often lurking behind an innocent-looking email or website link. This paradigm shift means that cyber safety has become as crucial as any other workplace safety protocol. It is here that cyber hygiene courses play a significant role.

Most employees, regardless of their designation or role, interface with digital tools like emails, messaging platforms, and digital databases. This widespread access to digital tools, while indispensable, poses a significant risk. If employees are not equipped with basic cyber hygiene knowledge, even unintentional actions can expose the entire organization to threats.

A deep dive into the essentials of cyber hygiene training

Every company has a diverse set of employees, from those in HR and marketing to IT professionals, legal experts, and database managers. While their roles vary, they all interact with the company's IT infrastructure, so cyber hygiene training is necessary for everyone. It should cover the following areas:

Awareness of threats. The digital realm has a vast array of threats, from phishing attacks to malware and deceptive social engineering tactics. Comprehensive training ensures employees can identify these threats, mitigating potential risks.

Password protocols. A strong password can often be the first line of defense against cyber-attacks. Employees need guidance on creating robust passwords and the benefits of two-factor authentication.
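Such guidance is usually backed by an enforced policy. A simple checker of the kind a company portal might run at password-change time could look like this (the specific rules and minimum length are illustrative assumptions, not a universal standard):

```python
import re

def password_issues(pw, min_length=12):
    """Return a list of policy violations; an empty list means acceptable."""
    issues = []
    if len(pw) < min_length:
        issues.append(f"shorter than {min_length} characters")
    if not re.search(r"[a-z]", pw):
        issues.append("no lowercase letter")
    if not re.search(r"[A-Z]", pw):
        issues.append("no uppercase letter")
    if not re.search(r"\d", pw):
        issues.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", pw):
        issues.append("no symbol")
    return issues

print(password_issues("hunter2"))                 # several violations
print(password_issues("c0rrect-Horse-battery!"))  # []
```

Length and uniqueness matter more than any single character class, which is why such checks are best paired with a ban on known-breached passwords and with two-factor authentication.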

Data protection. Data is often referred to as the 'new oil.' In a business context, data can include everything from company secrets to customer details. Understanding the importance of data and the protocols for its protection is paramount.

Incident management. Not every threat can be prevented. However, swift action can often limit the damage. Employees should be trained to recognize unusual activities and report them promptly.

The in-house vs. third-party training dilemma

Recognizing the need for training is only the beginning. The subsequent challenge is determining the optimal delivery method. The question arises: should businesses lean on their established in-house IT teams, or would it be wiser to seek the expertise of external professionals?

When considering the advantages of engaging external specialists, several factors stand out:

Proficiency. Firms in the cybersecurity domain naturally offer a reservoir of specialized knowledge. Their immersion in the field ensures that they bring a high degree of expertise to the table.

Bespoke solutions. Each business has its nuances. Recognizing this, external specialists are adept at fashioning strategies that are uniquely tailored, focusing on a company's specific requirements and vulnerabilities tied to their industry.

State-of-the-art tools. Another compelling reason to consider them is their familiarity with the latest in the cybersecurity landscape. These experts have their fingers on the pulse, utilizing cutting-edge tools and being aware of evolving threat scenarios. This ensures training remains pertinent and forward-thinking.

Yet, it's essential to balance these benefits with potential drawbacks. Among the challenges of relying on external expertise are the costs involved, which can stretch the budget, especially for larger organizations. There's also the concern that an external perspective might miss nuances inherent to a company's internal processes and culture.

On the other hand, the benefits of internal training are manifold:

Personalization. An internal approach offers a distinct advantage in its adaptability. Companies can sculpt the training, ensuring it's laser-focused on their infrastructure and unique challenges.

Autonomy. Having in-house training offers an unparalleled level of control. Every stage, from conception to delivery, remains under the company's purview.

The economic perspective. While the initial investment may be considerable, in the long term in-house training often proves more economical.

Yet, as with all strategies, there are inherent challenges to consider. Relying solely on in-house capabilities can sometimes lead to gaps in expertise. Limited resources might also become a constraint, and the time commitment to design and implement robust training modules shouldn't be underestimated.

Ensuring periodic refreshers

Experts recommend holding awareness courses at least once a year to update employees' knowledge and skills. Whenever new technologies are introduced or new security threats emerge, separate training should be conducted. This is to ensure that knowledge is up-to-date and to avoid compromising the effectiveness of the company's defenses.

Employees who have completed a full course do not need to retake it every year. Instead, periodic testing can reveal where knowledge has faded, supplemented by new training on product updates and emerging attack techniques. Employees who have forgotten certain topics or struggle with them can be assigned a condensed refresher. Knowledge should also be updated after any cyber incident, with the incident itself analyzed as a case study.

The bottom line

Cybersecurity today has become an important part of human security. Whereas theft once happened on the street or in back alleys, today it increasingly happens online. And whereas attackers once could only harm a business physically, today any company can be attacked over the internet.

Statistically, the most common cause of a cyberattack is human error. That is why employee cyber hygiene training is the foundation of a company's information security. No matter how advanced the installed antivirus software or how professional the IS department, one small mistake by an ordinary manager can put the company's database in the hands of attackers or let malware into the corporate network.

Regular sessions on cyber security not only prevent such incidents but can also help raise threat awareness and strengthen the security culture of an organization.

The necessity of cyber hygiene training in today's digital world

Sep 18, 2023 — 4 min read

Augmented Reality (AR) has made a huge impact on various sectors, ranging from gaming and entertainment to healthcare and industrial applications. As AR technologies evolve, concerns regarding their safety are becoming more prominent. This article analyzes the safety aspects of AR technologies through various lenses — user health, data security, and public safety.

User health

One of the foremost concerns regarding AR technology is its impact on user health. Extended usage of AR glasses or headsets can lead to eye strain, fatigue, and in severe cases, altered perception of the real world. For instance, Microsoft’s HoloLens, a pioneering AR headset, initially faced criticism for causing discomfort during extended use. Manufacturers have since been focusing on reducing weight and improving ergonomics. Moreover, as AR applications can be very immersive, there's a risk of physical accidents due to users not being fully aware of their surroundings. Niantic’s Pokemon Go, an AR game, reported several accidents where users, engrossed in the game, inadvertently put themselves in danger. To counter this, developers are now integrating real-world awareness features in applications to alert users of potential hazards.

Data security

As AR technologies often require access to sensitive data, such as user location and preferences, ensuring data security is imperative. Cyber-attacks aimed at AR devices can compromise personal information and in cases of industrial applications, trade secrets. For instance, the AR application Wikitude, which provides information about nearby locations, requires access to the user's location data. A breach could reveal sensitive information about a user’s movements. To mitigate such risks, companies are employing end-to-end encryption and robust authentication methods. Also, adherence to data protection laws such as the General Data Protection Regulation (GDPR) is essential.

Public safety

AR technologies have the potential to impact public safety. For instance, the use of AR in automobiles for navigation and information display must not divert the driver’s attention or contribute to accidents. Furthermore, as AR becomes prevalent in public spaces, such as shopping malls or airports, ensuring that AR content does not create panic or confusion is vital. In the automotive industry, companies like Hyundai are integrating AR in their Head-Up Displays (HUDs) to ensure that the information is non-intrusive and genuinely aids the driver without causing distractions.

Regulatory scrutiny

Regulatory bodies worldwide are keeping a close watch on AR technologies. The US Food and Drug Administration (FDA), for example, is actively involved in regulating AR devices used in healthcare. AR applications in surgeries and diagnostics should comply with safety and efficacy standards. Additionally, the Federal Trade Commission (FTC) may also step in to ensure that AR advertising does not mislead or harm consumers.

Social implications

It's also crucial to consider the social implications of AR. For instance, Google Glass faced backlash due to privacy concerns, as people around the wearer were unaware if they were being recorded. AR technologies must respect social norms and privacy expectations. Users should be informed and in control of the data that AR applications access and share.

The industrial aspect

In industries, AR is used for training, maintenance, and complex assembly tasks. While it enhances productivity, it is vital to ensure that AR does not compromise worker safety. Lockheed Martin, an American global aerospace, defense, and security company, employs AR for assembly and manufacturing processes. They have integrated safety protocols that make sure that AR usage complies with workplace safety norms.

Future directions

To ensure that AR technologies remain safe as they continue to evolve, a multi-pronged approach is needed. This includes continuous evaluation and improvements in hardware design, stringent data security measures, adherence to regulatory norms, and public awareness campaigns. The development of international safety standards for AR technologies could also be instrumental in ensuring a globally accepted safety benchmark.

Human factors

One aspect that needs further investigation is how AR affects human psychology and behavior. In educational environments, for example, AR can be a powerful tool. However, over-reliance might impede critical thinking and problem-solving skills if not implemented thoughtfully. It's essential that educationalists and psychologists work together with technology developers to create AR content that enhances learning while nurturing essential life skills.

Economic concerns

The economic implications of AR should not be ignored. As industries adopt AR for various applications, the job market dynamics are likely to change. On one hand, AR can enhance productivity and create new avenues for employment, but on the other hand, it can also render certain jobs obsolete. Governments and policymakers need to be proactive in identifying such trends and ensuring that the workforce is prepared to adapt to these changes.

Privacy paradox

The privacy concerns associated with AR are particularly challenging. As AR devices become more integrated into our daily lives, the line between what is private and what is not begins to blur. For example, future AR glasses might be capable of facial recognition and instant background checks. While this could be useful in certain scenarios, it raises significant ethical and privacy concerns. In this context, robust and adaptable privacy laws are crucial. Users should have the autonomy to control the data they share and understand the implications.

Ethical considerations

Beyond privacy, there are broader ethical questions to consider. As AR can manipulate the perception of reality, there is potential for misuse. The deepfake technology, for instance, can be combined with AR to create hyper-realistic forgeries that can deceive individuals. This could have serious implications in terms of misinformation, fraud, and personal safety. Ethical guidelines and regulations that specifically address such concerns are required.

Digital divide

As AR technologies advance, there is also a risk of widening the digital divide. Those who have access to these technologies may have significant advantages in terms of education, employment, and social opportunities compared to those who don’t. Ensuring that AR technologies are accessible and affordable is an important societal challenge that needs addressing.


AR technologies are a double-edged sword. On one side, they hold immense potential in enhancing our capabilities and experiences; on the other, they bring along a host of concerns pertaining to health, data security, public safety, ethics, and social equity. As technology continues to evolve, a multidisciplinary approach involving technologists, psychologists, lawmakers, ethicists, and the public is essential in ensuring that the development of AR is guided by principles that prioritize human safety and well-being. The road ahead should be paved with innovation, but caution should be the guiding light.

How safe are AR technologies?

Sep 13, 2023 — 4 min read

In the modern, fast-moving era, mobile banking has emerged as the go-to banking method for a vast majority. The allure of accessing your bank account from any location at any moment has indeed contributed to its widespread adoption. Yet, this ease of access is not without its drawbacks, primarily in the form of potential security breaches in mobile apps.

Mobile banking applications have turned into a hotspot for cybercriminals, incessantly seeking opportunities to exploit any weak points present in these apps. A security infringement in a mobile banking app can have catastrophic repercussions, affecting not only the individual user but also the banking institution at large.

In this article, we delve into the prevalent security threats that mobile banking apps are prone to, and the preventative measures that can be adopted to counteract these threats. But first, let's delve into the reasons behind the susceptibility of banking apps to such threats.

What makes banking apps prone to attacks?

The popularity of mobile banking apps among cybercriminals is hardly surprising. These apps harbor confidential data, including account details and personal identification information, which can be manipulated to siphon off funds or perpetrate identity fraud. Moreover, the extensive user base of these apps globally makes them a lucrative target for cyber assaults. While mobile banking apps offer a seamless user experience, they inadvertently create substantial security loopholes for both the users and the financial entities involved.

The security gaps in banking apps can facilitate unauthorized access to user accounts, data theft, and unauthorized fund transfers, among other issues. Cybercriminals might employ phishing schemes or other social engineering strategies to deceive users into disclosing confidential information or installing malware on their devices. Furthermore, these security lapses can tarnish the reputation of financial organizations. A data leak or other security incidents can diminish customer confidence and harm the brand's image. Several factors contribute to these risks, including:

Complexity. Contemporary banking apps are laden with a plethora of features aimed at enhancing user convenience. However, this complexity also escalates the difficulty in securing the apps, as each new feature potentially introduces new vulnerabilities.

Third-party integrations. A significant number of mobile banking apps depend on third-party code libraries and frameworks for functionalities like payment processing and data storage. These components, although handy, can pose security threats if not adequately scrutinized for vulnerabilities.

User conduct. Users can inadvertently augment the vulnerability of banking apps by opting for weak passwords, reusing passwords across various accounts, or neglecting timely security updates.

Indeed, these elements collectively render banking apps a lucrative target for attackers. Therefore, it is imperative for financial institutions to fortify their mobile apps to safeguard user data and assets. Having understood the vulnerabilities, let's now explore the specific threats that mobile apps are exposed to.

Identifying common vulnerabilities in banking apps

Cybercriminals are perpetually scouting for weaknesses in these apps to exploit and gain unauthorized access to user accounts. Despite the security protocols in place to shield user data, here are some prevalent vulnerabilities that could undermine mobile banking security:

Inadequate data protection. Mobile banking apps sometimes store sensitive details like user credentials and transaction histories on the device itself. If not encrypted or securely stored, this data can be an easy target for attackers.
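One standard mitigation is to never store credentials in recoverable form at all: derive a salted, slow hash and keep only that. A sketch using Python's standard library (the iteration count is an illustrative choice; pick it to match current guidance for PBKDF2-HMAC-SHA256):

```python
import hashlib, hmac, os

def hash_credential(secret, salt=None, iterations=600_000):
    """Derive a salted PBKDF2-HMAC-SHA256 digest suitable for at-rest storage."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, iterations)
    return salt, digest

def verify_credential(secret, salt, expected):
    _, digest = hash_credential(secret, salt)
    return hmac.compare_digest(digest, expected)   # constant-time comparison

salt, stored = hash_credential("s3cret-PIN")       # store salt + digest, never the PIN
print(verify_credential("s3cret-PIN", salt, stored))  # True
print(verify_credential("guess-PIN", salt, stored))   # False
```

Even if an attacker extracts the stored salt and digest from the device, recovering the original PIN requires a slow brute-force search rather than a simple read.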

Interception attacks. Man-in-the-middle (MITM) attacks happen when a hacker intercepts the communication between the user's device and the app's server, allowing them to view and alter the transmitted data, including login details and financial transactions.
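A common defense against interception is certificate pinning: the app ships with the expected server certificate's fingerprint and refuses to talk to any server presenting a different one. A hedged sketch of the idea (function names are illustrative; real apps often pin the public key rather than the whole certificate, to survive routine renewals):

```python
import hashlib, socket, ssl

def cert_matches_pin(der_cert, pinned_sha256_hex):
    """Compare a server certificate (DER bytes) against a pinned fingerprint."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex.lower()

def fetch_server_cert(host, port=443):
    """Fetch the peer certificate presented during a TLS handshake."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

# Usage (requires network): abort unless the fingerprint matches the value
# recorded at build time.
#   der = fetch_server_cert("example.com")
#   assert cert_matches_pin(der, expected_fingerprint_hex)
```

A man-in-the-middle who presents a certificate signed by some other trusted authority would pass ordinary validation but fail the pin check.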

Weak authentication protocols. Insufficient authentication methods, such as basic passwords or lack of multi-factor authentication, can facilitate easy access to user accounts for attackers. Hence, robust lockout systems, along with multi-factor authentication, should be implemented to prevent brute-force attacks.
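The lockout systems mentioned here amount to counting consecutive failures per account and refusing further attempts past a threshold. A toy illustration (the threshold and reset policy are simplified assumptions; production systems also add progressive delays and alerting):

```python
class LoginGuard:
    """Toy brute-force guard: lock an account after `max_attempts` failures."""
    def __init__(self, max_attempts=5):
        self.max_attempts = max_attempts
        self.failures = {}          # username -> consecutive failed attempts

    def record_failure(self, user):
        """Register a failed login; returns True if the account is now locked."""
        self.failures[user] = self.failures.get(user, 0) + 1
        return self.is_locked(user)

    def record_success(self, user):
        self.failures.pop(user, None)   # reset the counter on successful login

    def is_locked(self, user):
        return self.failures.get(user, 0) >= self.max_attempts

guard = LoginGuard(max_attempts=3)
for _ in range(3):
    guard.record_failure("alice")
print(guard.is_locked("alice"))   # True: further attempts are refused
```

Combined with multi-factor authentication, this turns an online brute-force attack from millions of guesses into a handful.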

Service sharing. Mobile banking apps sometimes share services with other apps on a device, creating potential security risks if those apps are susceptible to attacks.

Flawed encryption techniques. Encryption is vital for safeguarding sensitive data. However, if a banking app employs weak or improperly implemented encryption algorithms, it can be easily bypassed by attackers.

Code manipulation. Attackers might alter the app's code by adding or modifying malicious code, enabling them to access confidential data or seize control of the app.

Exploiting app vulnerabilities. Attackers might exploit vulnerabilities in the app itself, arising from insecure coding practices or outdated software components. A notable instance is the 2016 incident where hackers siphoned off $81 million from the Bangladesh Central Bank by exploiting a flaw in the SWIFT payment system utilized by the bank.

These vulnerabilities can severely compromise mobile banking security, potentially leading to financial losses and identity theft. Hence, it is vital for app developers to establish stringent security protocols.

How to fortify your mobile banking apps?

To guarantee the integrity of mobile banking apps, it is essential to adopt potent security strategies. In this segment, we will outline some of the most effective security protocols to shield against common app vulnerabilities:

Data encryption. Encrypting data is a potent security strategy that renders the data unintelligible to those without the decryption key, thereby thwarting attempts to misuse encrypted sensitive data.

Multi-factor authentication (MFA). MFA necessitates users to furnish multiple forms of verification before accessing their accounts, adding an additional security layer to mobile banking apps.

Application strengthening. Application strengthening involves altering the app's code to hinder reverse engineering attempts. This includes code obfuscation, data encryption, and incorporating anti-tampering mechanisms, making it challenging for attackers to retrieve sensitive data or alter the app.
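One building block of such anti-tampering is an integrity check: fingerprints of the app's code and resources are recorded at build time, recomputed at startup, and compared. A sketch of the fingerprinting step (the startup-check wiring around it is assumed, not shown):

```python
import hashlib

def file_fingerprint(path):
    """SHA-256 of a file's contents, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# At startup the app would recompute fingerprints of its bundled resources,
# compare them to the build-time values, and refuse to run on any mismatch.
```

Any modification of the checked files, even a single flipped byte, produces a completely different digest, so patched or repackaged copies of the app fail the check.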

Frequent updates. Regular updates to mobile banking apps are essential to address any existing security gaps. These updates often include bug fixes and security enhancements, so users should keep their apps updated to fend off emerging threats.

It is vital for senior management to recognize the significance of implementing robust security protocols in mobile banking apps. This not only safeguards customer data but also preserves the brand's reputation. A data breach can incur substantial financial and reputational losses. Hence, utilizing platforms like GuardRails can facilitate easier vulnerability detection and rectification, streamlining the process for both security and development teams.


While mobile banking apps have transformed financial management, they have introduced significant security concerns. Given the growing reliance on mobile apps for banking transactions, safeguarding mobile banking security is paramount to prevent financial and reputational damage. It is incumbent upon both individuals and organizations to comprehend the risks and adopt necessary precautions against potential threats. Banks and financial institutions must establish robust security protocols to protect customer data and finances. We have highlighted some prevalent banking app vulnerabilities and potential mitigation strategies. Regular security assessments, staff training, and customer awareness are crucial to maintaining a resilient mobile banking security stance. By adopting these strategies, banks can substantially diminish the risk of cyber-attacks, safeguarding customer assets and data, and ensuring that the convenience of mobile banking does not compromise security.

The price of accessibility: unveiling the greatest security hazards in mobile banking applications

Sep 11, 2023 — 3 min read

In the interconnected world of the 21st century, social media platforms have seamlessly integrated into our daily lives. They have revolutionized the way we communicate, share information, and even conduct business. These platforms, while fostering global connections and instant communication, also present a double-edged sword, especially for corporate entities. The delicate balance between accessibility and security is a tightrope that many companies grapple with, often finding themselves at a crossroads.

The allure and perils of unrestricted access

The digital age has ushered in an era where information is at our fingertips. The modern employee, driven by a desire to stay connected and informed, often finds the allure of unrestricted access to social media hard to resist. While tools like anonymizers, VPNs, and TOR provide gateways to this vast world, they also inadvertently open Pandora's box of cyber threats. These backdoors, often overlooked, can be exploited by seasoned cybercriminals, leading to catastrophic data breaches, significant financial losses, and irreparable damage to reputations. This begs the question: at what cost does this unrestricted access come?

The digital footprint

Every click, post, like, share, or comment on social media platforms contributes to an extensive digital trail. This trail, visible to anyone with the right tools, can be a goldmine for cybercriminals. By meticulously combing through this data, malicious entities can construct detailed profiles, targeting not just individuals but entire corporate hierarchies. The weaponization of this information can manifest in various sinister ways, from spear-phishing campaigns targeting specific employees to broader, more devastating attacks on a company's digital infrastructure. The depth and breadth of this footprint often go unnoticed until it's too late.

Historical context: lessons from past breaches

History is replete with examples that underscore the vulnerabilities tied to social media. The 2013 breach of the Associated Press's Twitter account serves as a grim reminder. Hackers disseminated false information about a terrorist attack, causing widespread panic and a temporary stock market crash. Similarly, the 2011 attack on RSA, a renowned system developer, highlighted the dangers of seemingly innocuous phishing emails. These emails, sourced from data harvested from social media, contained malicious links that, once clicked, wreaked havoc on the company's systems. More recent incidents, like the one faced by Elara Caring in 2020, further emphasize the ever-present and evolving nature of these threats.

The multifaceted nature of social media threats

The digital realm is vast, and so is the spectrum of threats emanating from social media. Phishing attacks, where cybercriminals don the guise of trustworthy entities, are becoming increasingly sophisticated. Corporate espionage, where competitors or rogue actors siphon confidential information for financial or strategic advantage, adds another layer of complexity. Even actions that seem benign on the surface, like an employee sharing a casual photo from their workstation, can inadvertently disclose confidential information. The ripple effects of such breaches can be far-reaching, affecting not just the immediate organization but also its stakeholders.

Towards a comprehensive security strategy

In the face of such multifaceted threats, a piecemeal approach to security won't suffice. Companies need a comprehensive, holistic strategy. This involves regular training, not only to equip employees with the tools to recognize potential threats such as phishing emails, but also to instill a culture of vigilance and best practices for online behavior. The nuances of password security, the importance of two-factor authentication, and the need for restricted access rights are foundational pillars that need to be emphasized.

However, human vigilance alone isn't enough. The rapid advancements in technology have armed companies with powerful tools like AI and machine learning. These technologies, capable of analyzing vast datasets swiftly, offer a proactive approach to security. They can detect anomalies, identify potential threats in their nascent stages, and even block malicious attempts, such as phishing emails, before they reach their intended targets.

Collaboration further strengthens this security framework. In the vast expanse of the digital realm, no company stands alone. By forging strategic alliances with external partners, including cybersecurity firms and industry peers, companies can share insights, pool resources, and present a united front against cyber threats. This collaborative ethos ensures that knowledge and expertise are continuously exchanged, enhancing the collective security posture.

Lastly, adaptability is key. The digital threats of today might not be the same as those of tomorrow. A robust security strategy is dynamic, evolving in response to new challenges and threats. Feedback mechanisms, where employees can promptly report suspicious activities, coupled with regular audits and assessments, ensure that security measures remain agile and ahead of potential threats.


The intricate dance between social media and corporate security is a testament to the challenges and opportunities of the digital age. While the threats are formidable, a proactive, informed, and collaborative approach can keep them at bay. In this ever-evolving landscape, security is not just an IT concern; it's a collective responsibility. By fostering a culture of awareness, vigilance, and collaboration, corporations can navigate the digital realm confidently, reaping its benefits while ensuring their assets remain secure.

The digital dilemma: navigating social media's threats to corporate security

Sep 6, 2023 — 4 min read

Within the spheres of information systems and software development, the role of test servers is undeniably essential. Test servers are purpose-built environments for experimenting, examining new features, and testing software updates without posing any threat to the stability and continuity of the main operational systems.

However, the nature and purpose of these test servers inherently introduce an array of information security risks. In this more extensive discussion, we will delve deeper into the central cybersecurity issues associated with the operation of test servers, and propose potential countermeasures and protective strategies.

Unpacking the core problem

A prevalent misconception among developers and IT professionals is that test servers represent an insignificant component within the larger company infrastructure. Consequently, they often exhibit a level of nonchalance towards these servers' security, believing that any attack, compromise, or system failure will not impact the primary infrastructure's operation.

Simultaneously, the Information Security (IS) departments within organizations often relegate test server security to a lower priority, given the servers' perceived secondary status compared to the primary, production-grade infrastructure, which typically enjoys robust technical and organizational protection measures.

However, despite this dismissive attitude, test servers frequently handle sensitive data during the testing and debugging process. This data can range from main infrastructure configuration elements to the personal data of clients or employees. The result is a precarious situation where developers are utilizing sensitive data in an environment with minimal control and oversight, and the IS department is without the necessary resources and technical wherewithal to guarantee the security of this process. Given this scenario, an incident becomes not a matter of if, but a matter of when it will occur.

Incidents of note involving test servers

Due to their generally weaker protection measures compared to the main infrastructure, test servers can become attractive targets for cyber attackers. Malevolent actors can exploit these servers as a backdoor into the main infrastructure or gain unauthorized access to sensitive company data. This risk is clearly exemplified in several high-profile incidents:

Uber, in 2016, was subjected to a significant security breach related to their test server. Intruders were successful in accessing Uber's GitHub repository that stored archived files of application code. As a direct consequence of this incident, the perpetrators were able to access sensitive data, including comprehensive user and driver details.

Facebook, in 2013, fell victim to a data breach caused by insecure configuration and setup of a test server. The attackers managed to access a test server loaded with various development and testing tools. As a result, the personal data of over 6 million users were compromised, showcasing the potential harm from such incidents.

British Airways, in 2018, suffered a security breach that impacted their test server. Attackers intercepted data, including the personal and financial information of over 380,000 customers, by injecting malicious code into the airline's test server.

These incidents not only underscore the fact that the issues surrounding test servers can affect a wide array of industries but also emphasize that a security breach does not always necessitate an external hacker or intruder.

Pressure points and their protective measures

Test servers are generally configured to favor the IT department's ease of use, thus inadvertently leading to conventional security issues such as weak passwords and a lack of access restrictions. While such configurations might provide comfort to developers, they pose serious implications for overall information security. Some common issues related to test server security are:

Data sensitivity. It's common for companies to overlook the need to disguise or mask data used for testing. Similarly, it's not unusual for passwords for the test infrastructure to remain unchanged for extended periods.

Protection levels. Regular servers typically have more robust protection measures such as firewalls, intrusion detection systems, and intrusion prevention systems. By contrast, test servers, which are meant for simplified operation and testing, frequently lack these powerful security mechanisms. These servers usually belong to a separate network infrastructure that offers a lower level of protection.

Access control. In the case of test servers, all users commonly have the same high-level permissions, making the infrastructure susceptible to breaches due to weak or duplicated passwords.

Vulnerabilities and bugs. Test servers, being the platform for new features and updates, may often contain older software versions, potentially brimming with exploitable vulnerabilities.

To tackle these issues, one primary protection method is to never use sensitive data in its unprocessed form on test servers. Data masking, despite being resource-intensive, can significantly decrease the severity of a potential leak.
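Masking can be as simple as replacing identifying values with deterministic surrogates, so that test data stays internally consistent (the same customer always maps to the same masked value) while revealing nothing real. A sketch, with an illustrative salt:

```python
import hashlib

SALT = "test-env-salt-2024"  # illustrative; keep the real value out of version control

def mask_email(email: str) -> str:
    """Replace an email with a deterministic, non-reversible surrogate."""
    local, _, _domain = email.partition("@")
    digest = hashlib.sha256((SALT + local).encode()).hexdigest()[:12]
    return f"user-{digest}@example.com"

# Same input -> same surrogate, so joins across test tables still work
assert mask_email("alice@corp.com") == mask_email("alice@corp.com")
assert mask_email("alice@corp.com") != mask_email("bob@corp.com")
```

Applying such a transform once, as data is copied from production to the test environment, sharply limits what a compromised test server can leak.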

In addition, the importance of a well-structured regulatory framework cannot be overstated. Even with minimal resources, adhering to a set of clear, structured regulations can greatly enhance the security of the test infrastructure.

Final thoughts

Security measures for test servers form a vital part of the overall development and testing process. Although dealing with test servers carries inherent risks that can have severe consequences for the company and its users, implementing appropriate security measures can greatly minimize these risks.

Key steps towards secure test servers include the isolation of test servers on a separate network, deployment of robust authentication and authorization mechanisms, regular server updates and configurations, restrictions on access to test data, and frequent vulnerability checks.

Security should not be an afterthought, but rather an integral part of every phase of development and testing. By instilling strong security measures, adhering to industry best practices, and regularly updating your security policies in line with the latest information security trends, the risks associated with test servers can be substantially mitigated. This ensures that data confidentiality and integrity are preserved, protecting your company from potential threats and incidents associated with test servers.

Test servers: the pitfalls of information security

Aug 22, 2023 — 4 min read

When it comes to investing in company security, there are different approaches. Some organizations allocate substantial funds to proprietary solutions offered by vendors, while others opt to develop their own SIEM (Security Information and Event Management) systems using open-source code.

The question arises: which option is more cost-effective in practice? Should one pay for a proprietary solution or rely on open-source alternatives? In this article, we delve into the realm of free SIEM solutions used in companies today, as well as the reasons why information security specialists often exhibit reluctance towards them.

A closer look at open-source SIEM systems

Much of the appeal of open-source solutions lies in their lack of restrictions. The most popular free SIEMs can handle any number of users and data, offer scalability, and garner support from the IT community.

Among the top-tier open-source SIEM systems, you’ll find:

AlienVault OSSIM SIEM. A version of AlienVault USM, a leading solution in this domain worldwide. Users gain access to a free framework encompassing intrusion detection systems, network and host monitoring, vulnerability scanning, and other open-source tools.

MozDef. Developed by Mozilla, MozDef is a SIEM system created from scratch. Similar to AlienVault OSSIM SIEM, it is built upon tried and tested open-source projects. The developers claim that MozDef can handle over 300 million events daily.

Wazuh. Originally developed within another open-source SIEM system called OSSEC, Wazuh evolved into a standalone product. It is capable of simultaneously collecting data through agents and system logs. Wazuh boasts a modern web interface, REST API, and an extensive set of rules.

OSSEC SIEM. Often referred to as the older sibling of Wazuh, OSSEC SIEM is widely recognized in the information security community as a reliable free intrusion detection solution.

Sagan. This SIEM tool specializes in real-time analysis of network inputs and the evaluation of their correlations. Its high performance stems from a multi-threaded architecture.

Prelude OSS. Serving as an open-source counterpart to the paid Prelude SIEM system from French developer CS, Prelude OSS supports various log formats and seamlessly integrates with popular open-source tools developed by others.

Additionally, companies often employ other free products like ELK SIEM, Snort, Suricata, SecurityOnion, Apache Metron, and more to construct their own systems. Many of these options are limited versions of proprietary software offered by vendors to familiarize users with their core systems.

When open source code is appropriate

One popular reason for implementing open-source SIEM today is to test-drive commercial systems, even with a minimal set of features. Free open-source versions allow professionals to evaluate expensive products in a live environment and gain insights into their inner workings.

Moreover, an open-source SIEM system becomes a viable choice when an organization can engage a large team of programmers. Any open-source solution necessitates further development and adaptation to fit seamlessly within the company's IT infrastructure. If there is no team available to handle these tasks, the utilization of free solutions loses its purpose.

One of the main challenges faced by companies employing open-source software is the lack of qualified specialists. Developing and maintaining such SIEM systems requires experienced Linux administrators, analysts, and experts proficient in connecting new sources, developing correlation rules, designing dashboards, and more. Given that freeware often comes with minimal features and customization options out of the box, significant work is involved, particularly during the initial months post-implementation.

These factors can impact the total cost of ownership of a system. Consequently, Open Source SIEM is a viable choice only for those who possess a thorough understanding of their requirements and have the necessary resources.

Challenges in open-source SIEM

There is a saying that "Linux is only free when you don't value your time." The same holds true for open-source SIEM tools. Slow product improvement also compromises the security of open-source products: addressing identified vulnerabilities can often take weeks or even months, providing an opportunity for cybercriminals to exploit them.

There are other notable considerations when it comes to open-source SIEM. First, an open-source system lacks official technical support: user queries regarding installation and maintenance of free solutions are typically addressed by fellow users rather than a dedicated owner-developer. Moreover, the project may simply cease to exist: even if a community actively supported a product yesterday, it may be abandoned the next day, leaving users without crucial updates.
Next, it is not a ready-to-use solution: to ensure proper functioning with data sources, connectors are required to convert incoming events into a compatible format for further processing.

These challenges are inherent to open-source SIEM systems and cannot be completely avoided. It is up to each company to determine whether they are willing to accept these risks.


Open Source SIEM systems are not universally suitable for every company. On one hand, adapting open-source code to align with specific requirements necessitates a team of highly skilled IT professionals and significant financial resources. On the other hand, regulatory requirements often dictate the installation of certified software.

However, dismissing open-source tools entirely would be unwise. They can be employed as references when establishing requirements and preferences for paid SIEM solutions.

Exploring free open source SIEM tools: advantages and disadvantages

Aug 8, 2023 — 4 min read

This latest update demonstrates our focus on refining user experience and enhancing collaborative password management.

No longer will you need to create password copies in various vaults — we've introduced shortcuts. With these handy labels, you can easily organize access to passwords from different directories.

The new enhanced settings provide administrators with more control over configurations and user rights, and all changes require approvals, preventing any unintentional actions.

LDAP user management has now become simpler with its cleaner interface and background data updates.

In addition to that, Passwork 6.0 brings new notifications and interface improvements. All these enhancements contribute to a more comfortable user experience while ensuring the security of passwords and sensitive data.


Shortcuts are a new way to share passwords, enhancing collaboration flexibility. There's no need for creating password duplicates in different vaults — instead, create multiple shortcuts in required directories. All changes to original passwords are reflected in shortcuts, keeping your team up to date. Users can view or edit data via shortcuts according to their access rights.

Choose the directories where you would like to create shortcuts
View the complete list of shortcuts to passwords created in a specific vault

Sending passwords without granting partial access to vaults

Previous versions of Passwork encrypted passwords at the vault level. This type of encryption gave users partial access to vaults even when a single password was shared with them. Now, when users access passwords via their "Inbox" or a shortcut, they receive keys to specific passwords, but not their vaults.

Administrators can clearly see who has vault access rights, and who can only work with specific passwords.

Send passwords to users with necessary access rights
View the complete list of all passwords that were sent from a specific vault


The LDAP interface is now cleaner and more intuitive, with a reimagined user management logic. Adding new LDAP users is simpler and safer, especially with the client-side encryption enabled.

Previously, admins had to add an employee and provide a master password. Now, users set their master passwords upon the first login, and admins confirm them afterwards.

The "Users" tab shows registered users, and there is a separate window for adding new ones. LDAP user data updates take place in the background, allowing admins to navigate elsewhere without waiting for data refresh.

View your LDAP user list and add users to Passwork
Set up your LDAP integration in the updated interface

Passwork now provides more detailed security group information. The groups that are linked to roles are marked with special tags, and the groups which were not loaded from LDAP during the last update are marked as "Deleted", alerting admins to adjust the search settings or remove such groups. Also, you can now see the members of each security group.

Map your LDAP groups with Passwork roles and set up their automatic synchronization

Improved settings

We've redesigned all settings sections for a unified visual style and enhanced functionality, and reimagined the logic of some settings.

Rights for links, tags, and password sharing. Previously, these settings were applied individually to each user. Now, they are applied to everyone with a certain level of vault access. For example, anyone with the “Edit” access rights or higher can create hyperlinks to passwords. These parameters are located in the system settings under the “Global” tab.

Change confirmation. We've added “Save” and “Cancel changes” buttons in system settings. Now, any changes to settings must be confirmed — this helps to prevent accidental actions.

Custom auto-logout time. Users can now set their auto-logout time individually, while admins specify the maximum inactivity period before automatic logout.

Language selection. In the new version of Passwork, admins can allow employees to choose their interface language.

Choose the required access level which will make it possible to send passwords, create links and shortcuts

Interface enhancements

Improved drag and drop. Now, when dragging and dropping passwords and folders into desired directories, Passwork displays selectable actions — move, copy, or create a shortcut.

Select folders and passwords, then drag and drop them to the required directory
Choose actions for the selected objects: move, copy, create shortcuts

Other improvements

Separate windows for vault access and additional access. Vault access info is now split into two easy-to-read windows. One window shows users who have access to a specific vault, and the other displays alternative ways passwords from this vault can be accessed — shortcuts, hyperlinks, or shared passwords.

Redesigned password action buttons. On the password panel, we've added the "Edit" button and grouped together all actions for additional password access via shortcuts, links, or direct user sharing.

Additional fields for password import and export. Passwork 6.0 supports the use of custom fields, which means you can transfer not only logins and passwords but also additional information stored within password cards.

New notifications. Administrators will receive notifications about new unconfirmed users, and employees will be notified of new passwords in the "Incoming" section.

Introducing Passwork 6.0

Jul 21, 2023 — 4 min read

A Security Operations Center (SOC) is a critical hub for cybersecurity within organizations. It combines people, processes, and technologies to detect, analyze, and respond to security incidents. In this article, we will delve into the components that make up a SOC, starting with its basic systems, then moving on to heavier software tools, and finally exploring emerging technologies that hold promise for the future of SOC operations.

Basic systems

The foundation of any SOC lies in its basic systems, which provide fundamental capabilities for monitoring, analysis, and incident response. These systems include:

A Security Information and Event Management (SIEM) system: A SIEM tool collects and correlates data from various sources, such as logs, network traffic, and endpoint events. It helps identify security incidents and generates alerts for further investigation. SIEM systems provide a centralized view of security events, allowing SOC analysts to detect patterns and anomalies.
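At its core, SIEM correlation is a rule evaluated over an event stream. A toy rule, flagging source IPs with too many failed logins inside a sliding window (the thresholds and event shape here are invented for illustration):

```python
from collections import defaultdict

def brute_force_alerts(events, threshold=5, window=60):
    """events: iterable of (timestamp, source_ip, action) tuples."""
    recent = defaultdict(list)   # source_ip -> timestamps of recent failures
    alerts = set()
    for ts, ip, action in sorted(events):
        if action != "login_failed":
            continue
        # keep only failures inside the sliding window, then add this one
        recent[ip] = [t for t in recent[ip] if ts - t <= window] + [ts]
        if len(recent[ip]) >= threshold:
            alerts.add(ip)
    return alerts

events = [(t, "10.0.0.7", "login_failed") for t in range(0, 50, 10)] \
       + [(5, "10.0.0.8", "login_failed"), (90, "10.0.0.8", "login_ok")]
print(brute_force_alerts(events))  # {'10.0.0.7'}
```

Production SIEMs run hundreds of such rules concurrently over normalized events from many sources, but the counting-within-a-window pattern is the same.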

Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS): IDS and IPS monitor network traffic, searching for suspicious patterns or known attack signatures. IDS detects intrusions, while IPS can actively block or mitigate threats in real time. These systems play a crucial role in detecting and preventing unauthorized access and malicious activities within the network.
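Signature matching, the classic IDS technique, reduces to pattern search over traffic payloads. A toy sketch with invented signatures (real rule sets such as those used by network IDS engines are far larger and operate on decoded protocol fields, not raw strings):

```python
import re

# Illustrative signature set: pattern -> alert name
SIGNATURES = {
    r"(?i)union\s+select": "SQL injection attempt",
    r"\.\./\.\./": "Path traversal attempt",
}

def inspect_payload(payload: str):
    """Return alerts for every signature that matches the payload."""
    return [name for pattern, name in SIGNATURES.items()
            if re.search(pattern, payload)]

print(inspect_payload("GET /search?q=1 UNION SELECT password FROM users"))
# ['SQL injection attempt']
```

An IDS stops at raising the alert; an IPS sits inline and would additionally drop the matching traffic.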

Vulnerability management systems: Vulnerability management systems scan and assess the organization's network, applications, and systems for vulnerabilities. They enable proactive identification and remediation of security weaknesses, reducing the risk of exploitation by attackers. These systems play a vital role in maintaining a secure infrastructure.

Log management systems: Logs are critical for forensic analysis and incident response. Log management systems collect, store, and analyze logs from various sources, providing valuable insights into security events. They help SOC teams investigate incidents, identify the root cause of security breaches, and ensure compliance with regulatory requirements.

Network Traffic Analysis (NTA) tools: NTA tools analyze network traffic at a granular level, identifying anomalies and potential threats. By monitoring and analyzing network traffic patterns, these tools help SOC teams detect and respond to suspicious activities. NTA tools enhance visibility into network behavior, allowing SOC analysts to identify sophisticated threats that traditional security systems may miss.

Heavier software

As threats become more sophisticated, SOC teams require advanced software tools to combat them effectively. Let’s take a look at some examples.

Threat intelligence platforms: Threat intelligence platforms aggregate data from various sources to provide up-to-date information about known threats, vulnerabilities, and indicators of compromise. They enhance incident detection and response capabilities by enabling SOC teams to proactively identify and mitigate potential risks. Threat intelligence platforms allow organizations to stay informed about emerging threats and adopt appropriate defense measures.

Endpoint Detection and Response (EDR): EDR solutions monitor endpoint devices for suspicious activities and potential threats. They provide real-time visibility, investigation, and response capabilities, helping SOC teams swiftly identify and contain incidents. EDR tools leverage behavioral analysis and threat intelligence to detect and respond to advanced threats, such as file-less malware and insider threats, at the endpoint level.

Security Orchestration, Automation, and Response (SOAR): SOAR platforms streamline and automate SOC processes, integrating various tools and technologies. They facilitate incident triage, investigation, and response, enabling faster and more efficient security operations. SOAR platforms automate routine tasks, allowing SOC analysts to focus on high-value activities like threat hunting and incident response.

User and Entity Behavior Analytics (UEBA): UEBA tools leverage machine learning algorithms to establish baseline behaviors for users and entities within an organization. They detect anomalous activities, such as insider threats or compromised accounts, by analyzing behavior patterns. UEBA tools provide insights into user activities, helping SOC teams identify potential security incidents and mitigate risks.
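Behind the machine-learning label, the baseline idea can be as simple as a z-score: learn a user's normal activity level, then flag observations far outside it. A toy sketch (the threshold of three standard deviations is a common but arbitrary choice, and real UEBA models combine many such features):

```python
from statistics import mean, stdev

def is_anomalous(history, observed, z_threshold=3.0):
    """Flag `observed` if it sits more than z_threshold std devs from the mean."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold

# Daily file-download counts for one user (illustrative)
baseline = [10, 12, 11, 9, 10, 11, 12, 10]
print(is_anomalous(baseline, 11))   # False: within normal range
print(is_anomalous(baseline, 120))  # True: possible data exfiltration
```

The value of UEBA lies in maintaining such baselines per user and per entity automatically, so that an account behaving unlike its own history stands out even when it behaves like a plausible user in general.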

Deception technologies: Deception technologies create decoys and traps within a network, luring attackers and diverting their attention. By interacting with deception assets, SOC teams can gather valuable threat intelligence and gain insights into attackers' techniques. Deception technologies complement traditional security measures by providing early detection and response capabilities.

Looking forward

The evolving threat landscape calls for constant innovation in the field of cybersecurity. Several technologies show promise for enhancing SOC capabilities in the future. Let’s take a look at a few.

Artificial Intelligence (AI) and Machine Learning (ML): AI and ML techniques are already being utilized in various aspects of cybersecurity. They can aid in threat detection, anomaly detection, and behavior analysis, enabling more proactive and accurate identification of security incidents. AI and ML algorithms can analyze vast amounts of data and identify patterns that human analysts may miss, improving the efficiency and effectiveness of SOC operations.

Advanced analytics: Advanced analytics techniques, such as predictive analytics and behavioral analytics, can provide deeper insights into security events and help identify emerging threats. By analyzing historical and real-time data, SOC teams can uncover hidden connections and predict future attack trends. Advanced analytics empower SOC analysts to make informed decisions, prioritize threats, and allocate resources effectively.

Cloud-based security: As organizations increasingly adopt cloud infrastructure, SOC operations will need to adapt accordingly. Cloud-native security solutions, including Cloud Access Security Brokers (CASBs) and Cloud Security Posture Management (CSPM) tools, are emerging to address the unique challenges of cloud environments. These solutions provide visibility, control, and compliance assurance across cloud services, ensuring that organizations can protect their data and applications effectively.

Internet of Things (IoT) security: With the proliferation of IoT devices, SOC teams will face the challenge of securing these endpoints. Future SOC technologies should incorporate specialized IoT security solutions that monitor and protect connected devices. IoT security platforms can detect and mitigate IoT-specific threats, such as device tampering, unauthorized access, and data exfiltration. These technologies enable SOC teams to secure the expanding landscape of IoT devices within organizations.

Quantum computing: Quantum computing has the potential to revolutionize cryptography and threat intelligence analysis. With its immense computational power, quantum computers may help SOC teams tackle complex cryptographic algorithms and facilitate faster threat analysis. Quantum-resistant encryption algorithms and quantum-enabled threat detection techniques may become crucial components of future SOC operations.


A well-equipped SOC comprises basic systems, advanced software, and future technologies. The basic systems form the foundation, providing essential monitoring and analysis capabilities. Advanced software tools enhance incident detection and response, allowing SOC teams to stay ahead of evolving threats. Looking ahead, emerging technologies like AI, advanced analytics, cloud-based security, IoT security solutions, and quantum computing hold the potential to revolutionize SOC operations, enabling organizations to protect their assets and data more effectively in an ever-changing cybersecurity landscape.

Exploring the components of a Security Operations Center (SOC): basic systems, advanced software, and future technologies

Jul 19, 2023 — 3 min read

Symmetric algorithms, forming the backbone of modern cryptography, offer a secure method of encrypting and decrypting data utilizing a single shared key. They have been widely adopted for their unmatched speed and efficiency. Like any other technology, symmetric algorithms come with their own set of benefits and drawbacks. This article seeks to offer a comprehensive review of the pros and cons of symmetric algorithms, providing a deeper understanding of their integral role in data security and the potential challenges they entail.

Pros of symmetric algorithms

Unrivaled efficiency

Symmetric algorithms are best known for their superior efficiency in handling large volumes of data for encryption and decryption. The use of a single key significantly reduces the demand for computational resources, setting symmetric algorithms apart from their asymmetric counterparts. This makes them an excellent fit for applications that demand high-speed data processing, including secure communication channels and real-time data transfers.
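
To see why a single shared key keeps things cheap, here is a deliberately simplified sketch: it derives a keystream from the key with SHA-256 and XORs it with the data, so the very same inexpensive operation both encrypts and decrypts. This is a toy construction for illustration only; real systems should use a vetted cipher such as AES:

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the shared key (toy construction)."""
    out = b""
    for block in count():
        out += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        if len(out) >= length:
            return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt: XOR is its own inverse, so one cheap pass does both."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"shared-secret"
ciphertext = xor_cipher(key, b"meet at dawn")
assert xor_cipher(key, ciphertext) == b"meet at dawn"  # the same key decrypts
```

Contrast this with asymmetric schemes, where encryption and decryption involve large-number modular exponentiation on every operation.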

Impressive speed

Symmetric algorithms, by virtue of their simplicity, can process data at a much faster rate than asymmetric algorithms. Without the need for complex mathematical operations, such as prime factorization or modular arithmetic, symmetric algorithms can encrypt and decrypt data rapidly, reducing latency. This speed advantage is particularly beneficial for applications requiring swift data encryption, including secure cloud storage and virtual private networks (VPNs).

Key distribution

Symmetric algorithms simplify the key distribution process. Given that both the sender and receiver utilize the same key, they only need to execute a secure key exchange once. This offers increased convenience in scenarios where multiple parties need to communicate securely, such as within large organizations, military operations, or corporate communications.
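
That one-time exchange is typically performed with a key-agreement protocol. The sketch below runs a textbook Diffie-Hellman exchange with a deliberately small 32-bit prime, for illustration only; real deployments use 2048-bit-plus groups or elliptic curves. Both parties derive the same shared secret without ever transmitting it:

```python
import secrets

# Toy Diffie-Hellman key agreement. The prime here is far too small for real
# use; it is chosen only so the numbers stay readable.
p, g = 0xFFFFFFFB, 5          # small demo prime and generator

a = secrets.randbelow(p - 2) + 2   # Alice's private value, never sent
b = secrets.randbelow(p - 2) + 2   # Bob's private value, never sent
A = pow(g, a, p)                   # Alice sends A over the open channel
B = pow(g, b, p)                   # Bob sends B over the open channel

# Both sides compute the same shared key: (g^b)^a == (g^a)^b (mod p)
assert pow(B, a, p) == pow(A, b, p)
```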

Computational simplicity

Symmetric algorithms are relatively straightforward to implement due to their computational simplicity. This allows for efficient coding, making them ideally suited for resource-constrained devices that possess limited computational capabilities, such as embedded systems or Internet of Things (IoT) devices. This simplicity also contributes to easier maintenance and debugging, reducing the potential for implementation errors that could compromise security.

Cons of symmetric algorithms

Complex key management

The management and distribution of shared keys are significant challenges inherent to symmetric algorithms. The security of these algorithms is closely tied to the confidentiality of the key. Any unauthorized access or compromise of the key can lead to a total breach of data security. Consequently, robust key management protocols are essential, including secure storage, key rotation, and secure key exchange mechanisms, to mitigate this risk.

Lack of authentication

Symmetric algorithms do not inherently provide authentication mechanisms. The absence of additional measures, such as digital signatures or message authentication codes, can make it challenging to verify the integrity and authenticity of the encrypted data. This opens the door for potential data tampering or unauthorized modifications, posing a considerable security risk.
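
The standard remedy is to pair the cipher with a message authentication code. The sketch below uses Python's standard-library `hmac` module: the sender tags the message with a MAC derived from the shared key, and the receiver rejects anything whose tag does not verify. The key and message are invented for the example:

```python
import hashlib
import hmac

key = b"shared-secret"
message = b"transfer 100 to account 42"

# Sender: compute a MAC over the message with the shared key
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver: recompute the MAC and compare in constant time
def verify(key: bytes, message: bytes, tag: str) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

assert verify(key, message, tag)
assert not verify(key, b"transfer 999 to account 13", tag)  # tampering detected
```

`hmac.compare_digest` is used instead of `==` to avoid leaking information through timing differences during comparison.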


Scalability challenges

Symmetric algorithms face challenges when it comes to scalability. Since each pair of communicating entities requires a unique shared key, the number of required keys grows quadratically with the number of participants: n participants need n(n-1)/2 keys in total. This can be impractical for large-scale networks or systems that involve numerous users, as managing a vast number of keys becomes complex and resource-intensive.
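
The growth is easy to quantify: every pair of participants needs its own key, so n participants require n(n-1)/2 keys:

```python
from math import comb

# comb(n, 2) counts the pairs of participants, i.e. n * (n - 1) / 2 keys
for n in (10, 100, 1000):
    print(n, "participants ->", comb(n, 2), "keys")
# 10 participants -> 45 keys
# 100 participants -> 4950 keys
# 1000 participants -> 499500 keys
```

An organization of a thousand users would therefore need to generate, distribute, and rotate nearly half a million keys, which is why large systems typically layer a key-distribution mechanism or asymmetric cryptography on top.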

Lack of perfect forward secrecy

Symmetric algorithms lack perfect forward secrecy, meaning that if the shared key is compromised, all previous and future communications encrypted with that key become vulnerable. This limitation makes symmetric algorithms less suitable for scenarios where long-term confidentiality of data is crucial, such as secure messaging applications.

An in-depth analysis of symmetric algorithms

Symmetric algorithms, including the widely adopted AES, DES, and Blowfish, are favored for their speed and efficiency. However, their robustness is largely dependent on the size of the key and the security of the key during transmission and storage. While larger keys can enhance security, they also increase the computational load. Thus, selecting the appropriate key size is a critical decision that requires a careful balance between security and performance requirements.

One of the standout strengths of symmetric encryption is its application in bulk data encryption. Because of their speed, symmetric algorithms are ideally suited for scenarios where large amounts of data need to be encrypted quickly. However, they may not always be the best solution. In many cases, asymmetric encryption algorithms, despite their higher computational demands, are preferred because of their additional security benefits.

It's also crucial to note that cryptographic needs often go beyond just encryption and decryption. Other security aspects, such as data integrity, authentication, and non-repudiation, are not inherently provided by symmetric algorithms. Therefore, a comprehensive security scheme often uses symmetric algorithms in conjunction with other cryptographic mechanisms, such as hash functions and digital signatures, to provide a full suite of security services.

Final thoughts

Symmetric algorithms occupy a pivotal place in the realm of cryptography. Their efficiency and speed make them an invaluable asset for many applications, especially those involving large-scale data encryption. However, the limitations inherent in symmetric algorithms, including key management complexities, lack of authentication, and absence of perfect forward secrecy, necessitate meticulous implementation and the incorporation of additional security measures. Therefore, the decision to utilize symmetric algorithms should be made based on a thorough understanding of these pros and cons, as well as the specific requirements of the system in question.

Pros and cons of symmetric algorithms: ensuring security and efficiency

Jul 3, 2023 — 4 min read

The marvels of modern computing are, in part, thanks to advances in artificial intelligence. Specific breakthroughs in large language models, such as OpenAI's GPT-4 and Google's BERT, have transformed our understanding of data processing and manipulation. These sophisticated models masterfully convert input data—whether it be text, numbers, or more—into a form that machines can understand. This intricate process, known as data encoding, serves as the foundation for these models to comprehend and generate human-like text. Let's delve deeper into the intricacies of data encoding and how it powers the magic of AI language models.

The secret code of machines

The journey begins with understanding how GPT-4 or BERT processes the sentences typed into them. Unlike humans, these models can't directly interpret words. Instead, they employ something known as word embeddings. This complex yet efficient technique transforms each word into a unique mathematical form, akin to a secret code decipherable only by machines. Each encoding is meticulously performed to ensure that semantically similar words receive comparable codes. The aim is to create a rich, multidimensional landscape where each word's meaning is determined by its location relative to other words.
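
As a small illustration of that landscape (with made-up three-dimensional vectors; real embeddings have hundreds of learned dimensions), cosine similarity shows how "comparable codes" translates into geometric closeness:

```python
from math import sqrt

# Hypothetical 3-D embeddings, invented for the example
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.88, 0.82, 0.12],
    "apple": [0.1, 0.2, 0.95],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Semantically similar words end up close together in the embedding space
print(cosine(embeddings["king"], embeddings["queen"]))  # close to 1.0
print(cosine(embeddings["king"], embeddings["apple"]))  # much lower
```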

The role of positional encoding in context understanding

While individual words carry their importance, the structure of language extends beyond isolated entities. The sequence of words, the context, can drastically alter the meaning of a sentence. To illustrate, consider the phrases "Dog bites man" and "Man bites dog." The same words are used, but their arrangement creates entirely different narratives. That's where positional encoding enters the picture. By assigning each word an additional code indicating its position in the sentence, positional encoding provides models with a vital understanding of language structure and syntax.
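
One widely used scheme is the sinusoidal encoding from the original Transformer paper, sketched below. Note that some models instead learn their positional embeddings, so this is one illustrative variant rather than the method every model uses:

```python
import math

def positional_encoding(position: int, d_model: int) -> list[float]:
    """Sinusoidal positional encoding: even dimensions use sine, odd use cosine,
    with wavelengths that grow geometrically across the dimensions."""
    pe = []
    for i in range(d_model):
        angle = position / (10000 ** (2 * (i // 2) / d_model))
        pe.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return pe

# Each position gets a distinct code, so the same word at different
# positions carries different positional information
print(positional_encoding(0, 4))  # [0.0, 1.0, 0.0, 1.0]
print(positional_encoding(2, 4))
```

The encoding is added to each word's embedding, letting the model distinguish "Dog bites man" from "Man bites dog" even though the word vectors themselves are identical.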

The attention process: making words context-aware

After word and positional encoding, these mathematical representations, or word embeddings, undergo an 'attention' mechanism. Here, each word embarks on a figurative group discussion with all the other words in the sentence. During this interaction, each word decides the importance it should attribute to the others. For instance, in the sentence "Jane, who just moved here, loves the city," the word "Jane" would assign significant attention to "loves."

These 'attention' weights are then used to compute a new representation for each word that is acutely aware of its context within the sentence. This batch of context-aware embeddings journeys through multiple layers within the model, each designed to refine the model's understanding of the sentence. This systematic processing prepares the model to generate responses or predictions that accurately reflect the intended meaning of the sentence.
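
The group-discussion metaphor corresponds to scaled dot-product attention. The following pure-Python sketch (with tiny made-up 2-D vectors standing in for real embeddings) computes, for each word, a softmax-weighted mix of all the words' values:

```python
import math

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: out = softmax(QK^T / sqrt(d)) V."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)      # how much this word attends to each word
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three 'words' as 2-D vectors; self-attention uses the same matrix as Q, K, V
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
context_aware = attention(x, x, x)
print(context_aware)  # each row is now a context-aware blend of all three words
```

In a real Transformer, Q, K, and V are separate learned projections of the embeddings, and many attention heads run in parallel; the weighted-mix mechanism is the same.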

GPT-4: writing text one word at a time

GPT-4 has adopted a unique approach when it comes to generating text. It operates on a "one word at a time" principle. Beginning with an input, it predicts the next word based on the preceding context. This predicted word is then included in the context for predicting the following word, and the process repeats. This strategy allows GPT-4 to produce text that is not just grammatically coherent, but also semantically relevant, mirroring the way humans write one sentence after another.
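
The loop itself is simple, whatever the model behind it. The sketch below swaps the neural network for a hypothetical lookup table of word continuations, purely to show the "predict, append, repeat" mechanism:

```python
# Hypothetical bigram 'model': given the last word, predict the next.
# A real model conditions on the entire context with a neural network.
bigram = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def generate(start: str, length: int) -> list[str]:
    words = [start]
    for _ in range(length):
        nxt = bigram.get(words[-1])
        if nxt is None:           # no known continuation: stop early
            break
        words.append(nxt)         # the prediction joins the context
    return words

print(" ".join(generate("the", 4)))  # the cat sat on the
```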

BERT: a 360-degree view of sentence context

BERT, on the other hand, possesses a distinct capability that sets it apart from other models. It can process and understand text in both directions simultaneously. BERT does not limit itself to considering words before or after a given word. Instead, it absorbs the entire context at once, effectively offering a 360-degree view of the sentence. This bidirectional understanding enables BERT to comprehend the meaning of words based on their complete context, significantly enhancing the model's ability to interpret and generate nuanced responses.

The versatility of data encoding

While language forms a significant chunk of these models' use cases, they aren't confined to it. An exciting feature of models like GPT-4 and BERT is their ability to work with any kind of sequential data. This characteristic opens up a universe of possibilities for diverse fields, from composing harmonic music to decoding complex genetic sequences, predicting stock market trends, or even simulating game strategies. By analyzing patterns in the sequential data, these models can unearth hidden insights and produce creative outcomes, making them an invaluable asset in numerous areas beyond language processing.

Expanding horizons: applications and future prospects

The wonders of data encoding do not stop with text generation. In fact, the potential applications of these AI models are continually expanding. They can be used to aid human decision-making in complex scenarios, such as medical diagnosis or legal analysis, by digesting massive amounts of textual data and making informed suggestions. In the field of research, they can help summarize lengthy academic papers or generate new hypotheses based on existing literature. The entertainment industry isn't left out either, as these models can create engaging content, ranging from writing captivating stories to generating dialogues for video games.

Moreover, GPT-4 and BERT's remarkable abilities to understand and manipulate language are catalyzing research into other AI models. Researchers are exploring ways to combine the strengths of various models and reduce their limitations, which promises an even more exciting future for AI.


In conclusion, data encoding in AI models like GPT-4 and BERT can be likened to watching a symphony of processes working in perfect harmony. From word embeddings and positional encoding to attention mechanisms, these models leverage a series of intricate techniques to decode the hidden patterns in data, transforming it into meaningful information. The incredible capability of these models to understand context, generate human-like text, and adapt to diverse data types is revolutionizing the field of artificial intelligence, paving the way for a future brimming with AI innovations.

How large language models encode data

Jun 30, 2023 — 4 min read

The intricate dance between spies and encryption has been played out over thousands of years, and its rhythm continues to quicken. As we delve deeper into this dance, we see a confluence of technology, secrecy, and intelligence that has shaped the course of history and continues to impact our world today.

A brief history of encryption

Encryption, in its essence, is about transforming information into a code that is unreadable to anyone except the intended recipient. The Ancient Greeks and Romans were among the first to understand this concept. They used simple substitution ciphers, where each letter in a message would be replaced by another letter. The Caesar Cipher, used by Julius Caesar for secure military communications, is a classic example. This cipher was a simple shift of the alphabet, and while it was easy to break, it underscored the vital principle of encryption: the ability to shield information from prying eyes.
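
The Caesar Cipher is simple enough to reproduce in a few lines. A minimal sketch, shifting each letter and wrapping around the alphabet:

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` places, wrapping around the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

ciphertext = caesar("attack at dawn", 3)
print(ciphertext)                          # dwwdfn dw gdzq
assert caesar(ciphertext, -3) == "attack at dawn"
```

With only 25 possible shifts, the cipher falls to simple trial and error, which is precisely why it illustrates the principle of encryption rather than its practice.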

Fast forward to the World Wars, and we see the stakes rising. Encryption technology became more advanced and intertwined with the art of espionage. The Enigma machine, utilized by the Germans during World War II, is a well-known example of this progression. The Enigma was an electromechanical device that used a set of rotating disks to scramble plaintext messages into complex ciphertext. The machine's complexity made it a powerful tool for clandestine communication, and cracking its code was a task that required immense intellectual prowess.

The British cryptanalysts at Bletchley Park, led by Alan Turing, took on this monumental challenge. Their efforts to break the Enigma code were not just a victory for cryptography but also a critical factor that contributed to the Allied victory. It was espionage and counter-espionage at its finest, with encryption at the heart of the battle.

The Cold War saw the battleground shift, but the importance of encryption remained the same.

Cryptic case studies

In the last few years, we've seen several noteworthy instances where spies have used encryption and other covert tactics to achieve their goals. Let's explore five recent cases:

  1. Walter Glenn Primrose and Gwynn Darle Morrison were apprehended for a most unusual crime: the theft of deceased infants' identities. The case took a peculiar twist when a photograph was discovered featuring the duo dressed in what were purportedly KGB uniforms. The objective behind this odd photograph remains shrouded in ambiguity. However, this case casts a spotlight on the extraordinary lengths to which spies will go to obscure their true identities, potentially utilizing encryption as a tool to further conceal their covert activities.
  2. In another instance, the veneer of Russian intelligence's machinations to sway U.S. elections was peeled back with the indictment of Alexandr Viktorovich Ionov. This indictment served as a window into the operational mechanics of the Russian FSB, divulging their deployment of an array of stratagems, including the use of encryption and disinformation campaigns, to sow seeds of chaos within the United States and undermine the foundations of global democracy.
  3. A narrative involving U.S. nuclear engineer Jonathan Toebbe and his spouse, Diana, unfolded as the pair tried to profit from selling purloined U.S. Navy nuclear documents and designs in exchange for cryptocurrency. Their surreptitious operation was derailed by an FBI sting operation, underscoring the pivotal role of counterintelligence mechanisms and encryption in the intricate world of modern espionage.
  4. In 2022, European nations initiated a large-scale crackdown on Russian intelligence operations that resulted in the expulsion of 556 Russian intelligence officers and diplomats. This widespread ejection served to hinder Russia's influence operations on European soil and stifle their capacity to manage and support their spy networks. This incident underscores the concerted global efforts being taken to neutralize encrypted espionage activities.
  5. China remains a prominent figure on the stage of global espionage, relentlessly targeting Western technological knowledge and silencing dissent within the Chinese diaspora. One particularly significant case involved Yanjun Xu, an intelligence officer who set his sights on the U.S. aerospace sector. Xu's subsequent 20-year prison sentence serves as a sobering testament to China's long-haul espionage strategy, the role of encryption in facilitating covert operations, and the international cooperative efforts required to counteract such threats.

Reflecting upon both historical and contemporary instances, one can see the intricate duality that encryption presents in espionage. As a tool, it serves as a means for covert agents to veil their communications, maintaining the secrecy that their roles require. At the same time, it is the shield that protects sensitive data from falling into the wrong hands, acting as a safeguard against the very spies who would employ it for their own purposes.

As technology surges forward, the tactics and techniques of spies and cryptographers mirror this progression. This perpetual cycle of action and counteraction — the relentless pursuit to devise impenetrable codes on one hand, and the counter-effort to decode them on the other — encapsulates the ongoing relationship between espionage and encryption. This dance, marked by strategic maneuvering, continuous adaptation, and intellectual rigor, escalates in intensity with each passing moment, as each side ceaselessly strives to outsmart the other.

Summing up

In conclusion, the interplay between espionage and encryption is a nuanced ballet of complexity and evolution. Encryption serves dual roles in this dance: as a tool that facilitates the clandestine operations of spies, and as a defense mechanism against such covert activities. As we continue to ride the wave of technological advancement, the ties between espionage and encryption will undoubtedly become more intricate, more convoluted, and more pivotal. Whether the scenario involves state-sponsored cyber-espionage aiming to disrupt nations or individuals attempting to monetize state secrets, encryption remains a critical element in these engagements, forming the core of this ongoing battle.

Spies and encryption: a dance of secrecy and technology

Jun 28, 2023 — 5 min read

The globe has become profoundly reliant on technology, information, and the web. Although this has streamlined business processes and made them more efficient, it has also given rise to severe issues like cyber threats. The frequency of cyber attacks is escalating at a distressing pace.

Studies indicate that globally, every organization is subjected to over nine hundred cyber assaults on a weekly basis. This has culminated in a plethora of both concrete and abstract losses for organizations.

With the surge in cyber attacks, there is a corresponding rise in the need for experts in cyber security. Organizations are in dire need of specialists who can shield them from these onslaughts. Consequently, career opportunities in cybersecurity are burgeoning at an unprecedented rate in the United States.

As time has passed, cybersecurity has burgeoned into a sector replete with highly specialized roles that offer lucrative remuneration. Each role comes with its own set of requirements, competencies, and perspectives. Let’s delve into an overview of the six most remunerative careers in the realm of cybersecurity.

Information security analyst

Information security analysts play a pivotal role in instituting cyber security protocols within a company or organization. A prime example of their responsibilities is the installation of firewalls. These firewalls act as a critical bulwark, providing an augmented shield to safeguard the organization's network.

In addition to installing firewalls, information security analysts wear multiple hats. They are involved in perpetually monitoring the organization’s networks for any security breaches and investigating violations when they occur. They are also tasked with creating and executing plans to combat potential security incidents and bolster the organization's security posture.

Moreover, they frequently need to stay abreast with the latest trends and developments in information security to ensure that the organization's security measures are up-to-date. This includes not only understanding the technical aspects but also the regulatory compliance and best practices to safeguard sensitive information.

When it comes to remuneration, the average baseline salary for an information security analyst in the United States is approximately $93,861 annually. However, this figure can vary based on factors such as location, level of experience, education, and the size and industry of the employer. Experienced analysts or those working in sectors with higher security demands may command higher salaries.

Cloud consultant

In the United States, a cloud consultant typically earns an average annual salary of around $127,105. Their role is primarily centered on working with cloud storage systems. Their responsibilities encompass the development, deployment, and maintenance of cloud applications, workflows, and services. Moreover, they rigorously analyze the organization’s data.

Through meticulous scrutiny of the data and understanding of the business requirements, they deduce the most appropriate cloud solution tailored to the organization’s needs. In addition to identifying the optimal cloud solutions, they also serve as advisers in the domain of cloud security. They meticulously evaluate the array of cloud services leveraged by the organization and proceed to suggest solutions that can bolster the security framework.

Furthermore, cloud consultants often engage in facilitating the migration of an organization's data and applications to the cloud. They are instrumental in ensuring a seamless transition while minimizing downtime and mitigating risks.

Given their expertise, they also provide insights and recommendations on cost management strategies, scalability, and disaster recovery plans within the cloud environment. Their role is essential for organizations to capitalize on the benefits of cloud computing while ensuring data integrity and security.

Penetration tester

Penetration testers serve as invaluable assets to organizations by pinpointing and rectifying security vulnerabilities through the execution of simulated cyberattacks. These professionals, often termed “ethical hackers,” mimic the tactics of malicious hackers in a controlled environment to evaluate the security infrastructure. In the United States, they typically earn an average annual salary of approximately $127,170.

After the simulated attacks, penetration testers meticulously analyze the data to identify potential weak points in the system. Based on their assessments, they recommend and implement robust security measures designed to thwart actual cyberattacks. Their insights are crucial in fortifying the organization’s defense mechanisms.

Organizations that handle sensitive, personal, or classified information regard penetration testers as indispensable. Industries such as healthcare, finance, and government, which are especially sensitive to data breaches due to the nature of the information they manage, are more likely to employ penetration testers. These professionals might be hired for a specific project or be an integral part of the in-house cybersecurity team.

Network security architect

In the United States, a network security architect typically garners an average annual income of around $130,028. These professionals shoulder the critical responsibility of safeguarding an organization's network infrastructure.

A network security architect’s role encompasses designing, deploying, and rigorously testing networks to ascertain that they are impervious to cyberattacks and that security protocols are adhered to. This involves crafting network structures that are resilient and implementing cutting-edge security technologies to mitigate risks.

Additionally, network security architects play a vital role in the evolution of the organization's Local Area Network (LAN), Wide Area Network (WAN), and other data communication networks. They make sure these networks are not only secure but also efficient and scalable to accommodate the organization’s evolving needs.

Application security engineer

In the United States, application security engineers typically earn an average annual salary of around $126,391. These professionals play an integral role in guaranteeing that an organization’s software products function securely and dependably. Additionally, they extend their expertise to safeguard the organization's network and data repositories.

Collaboration is at the heart of the role of an application security engineer. They work hand-in-hand with software developers and product managers in a concerted effort to plan, enable, and bolster security implementations aimed at fortifying applications and software products. This involves integrating security measures throughout the software development lifecycle.

Their responsibilities include performing code reviews to identify vulnerabilities, implementing encryption and other security features, and ensuring compliance with industry security standards. They also design and conduct security tests to evaluate the resilience of applications against various attack scenarios.

Director of information security

The Director of Information Security holds a high-ranking position within an organization. In the United States, individuals in this role can expect an average base salary of approximately $206,475 annually. Additionally, they often receive yearly bonuses, which further enhance their earnings. The primary responsibility of a Director of Information Security is to devise and cultivate strategies aimed at bolstering the organization’s cybersecurity posture.

In their capacity, they assume a leadership role in managing and supervising a multitude of elements that make up the organization’s cybersecurity blueprint. This encompasses the creation and enforcement of security policies, conducting risk assessments, and ensuring compliance with regulatory standards.

Beyond developing strategies, they often have a bird's-eye view of the organization's security landscape and work closely with other departments to integrate cybersecurity measures into the broader organizational objectives. This sometimes involves communicating with the board of directors or other stakeholders to ensure they are aware of the security risks and measures in place.

Final words

It is evident that the cybersecurity field boasts an array of lucrative career opportunities, with even entry-level positions commanding attractive compensation. As one gains experience and demonstrates proficiency, there is a commensurate escalation in remuneration. If you’re seeking a rewarding and thriving career, the cybersecurity domain is ripe with possibilities, making the present moment an ideal time to venture into this sector.

To gain a foothold in the cybersecurity industry, it is imperative to possess relevant certifications or a degree in cybersecurity. Numerous educational institutions, including colleges and universities, offer a range of programs in this field. It is advisable to explore and enroll in a program that aligns well with your career aspirations and preferences.

The 6 highest-paid professions in cybersecurity

Jun 9, 2023 — 4 min read

Biometric data refers to physical or behavioral characteristics that can be used to recognize a person. Indeed, in today's world, biometric data has become a widely used method of identifying individuals. Some examples of biometric data include fingerprints, facial recognition, and iris scans. Biometric data is now being used in a variety of applications, including in passports. Passports with biometric data are now the norm in many countries, and they offer several advantages over traditional passports. However, they also come with their own set of risks, including the possibility of being hacked.

The primary purpose of biometric data on a passport is to improve security and reduce the likelihood of identity fraud. The biometric data on a passport is unique to each individual and is difficult to replicate or forge. This makes it much harder for someone to use a fake passport or assume someone else's identity. In addition, biometric data can be used to speed up the passport control process, reducing wait times at airports and border crossings.

However, biometric data on passports is not foolproof. Hackers can potentially access this data and use it for their own purposes. For example, they may be able to use the data to create fake passports or to steal someone's identity. This is a major concern for many governments and individuals, as the consequences of identity theft can be severe.

Ways in which the biometric data on passports can be hacked

There are several ways in which biometric data on passports can be hacked. One method is through the use of skimming devices. Skimming devices can be used to steal the data on a passport's RFID chip, which contains biometric data. These devices can be hidden in public places, such as airports or train stations, and can be used to steal data from unsuspecting individuals. Once the data has been stolen, it can be used to create fake passports or to steal someone's identity.

Another way in which biometric data can be hacked is through cyberattacks. Cybercriminals can use various methods to gain access to a passport database and steal the biometric data contained within it. This data can then be sold on the dark web to other criminals or used to create fake passports. Cyberattacks can also be used to alter or delete data in the passport database, which can cause chaos and confusion for governments and individuals alike.

One example of biometric data being hacked is the 2014 breach of the US Office of Personnel Management. In this breach, hackers were able to steal sensitive data, including the biometric data of millions of government employees. This data included fingerprints, which can be used to identify individuals. The breach was a significant blow to US national security, and it highlighted the vulnerability of biometric data.

Another example of biometric data being hacked is the 2019 breach of Suprema, a biometric security company. In this breach, hackers were able to access the biometric data of millions of people, including fingerprint and facial recognition data. This data was being used by various organizations for security purposes, and the breach was a major concern for those who had entrusted their biometric data to Suprema.

The risks of biometric data being hacked are significant, as the consequences can be severe. For example, if a criminal gains access to someone's biometric data, they can potentially use it to create fake passports, steal their identity, or commit other crimes. This can result in financial loss, legal troubles, and damage to one's reputation.

How to protect biometric data on passports

To protect biometric data on passports, individuals and governments need to take steps to minimize the risk of it being hacked. One key step is to use encryption to protect the data while it is being transmitted and stored. Encryption is the process of encoding data so that it can only be accessed by authorized parties holding the appropriate decryption key. By encrypting biometric data on passports, the risk of it being intercepted or stolen by unauthorized parties is reduced.
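The encrypt-then-decrypt-with-a-key idea can be sketched in a few lines of Python. The snippet below is a toy illustration only: it builds a hand-rolled HMAC-based stream cipher from the standard library so the example stays self-contained. Real e-passports use standardized protocols (Basic Access Control, PACE) with certified cryptography, and the key, nonce size, and placeholder payload here are all invented for the example.

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by counting HMAC-SHA256 blocks."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream; prepend the random nonce."""
    nonce = secrets.token_bytes(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce, body = ciphertext[:16], ciphertext[16:]
    return bytes(c ^ k for c, k in zip(body, keystream(key, nonce, len(body))))

key = secrets.token_bytes(32)                # would live in secure storage
template = b"fingerprint-template: ..."      # invented placeholder payload
sealed = encrypt(key, template)
assert decrypt(key, sealed) == template      # only the key holder can read it
```

Without the key, the stored ciphertext reveals nothing useful about the underlying biometric template.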

Another important step is to improve cybersecurity measures to prevent cyberattacks. This includes implementing firewalls, using secure passwords, and regularly updating software and security protocols. It is also important to educate individuals about the risks of biometric data being hacked and how to protect themselves.

In addition, individuals can take steps to protect their own biometric data. This includes being vigilant about suspicious activity, such as phishing emails or phone calls that ask for personal information. It is also important to keep passports and other sensitive documents in a safe place and to report any lost or stolen passports immediately.

Despite the risks associated with biometric data on passports, it is important to note that it remains one of the most secure methods of identification available. While no security system is foolproof, the use of biometric data can significantly reduce the risk of identity fraud and improve security at airports and border crossings. By taking steps to protect biometric data, individuals and governments can minimize the risks of it being hacked and ensure that it remains a secure method of identification for years to come.


Biometric data on passports offers several advantages over traditional passports, including improved security and faster passport control. However, it also comes with its own set of risks, including the possibility of being hacked. Hackers can use various methods to access biometric data, including skimming devices and cyberattacks. Governments and individuals need to be aware of these risks and take steps to protect their biometric data. This may include using encryption, improving cybersecurity measures, and being vigilant about suspicious activity. The consequences of biometric data being hacked can be severe, and it is up to all of us to take steps to prevent it from happening.

Biometric data on your passport — can it be hacked?

Jun 7, 2023 — 4 min read

The International Space Station (ISS) represents one of humanity's greatest achievements in space exploration and scientific research. While its primary purpose is to facilitate scientific advancements and international cooperation, the ISS also plays a crucial role in advancing cybersecurity. In this article, we will explore the five main ways in which the ISS ensures cybersecurity and the unique challenges and opportunities it presents in the realm of securing information and communication in space.

Taking advantage of an isolated environment

The ISS operates in a unique environment, physically isolated from the Earth's surface. This isolation offers inherent advantages for cybersecurity: the station's remoteness shields it from many terrestrial threats, such as physical tampering or local electromagnetic interference. This isolation enables the creation of a controlled and secure network environment, which is crucial for ensuring the confidentiality, integrity, and availability of data and communication systems on board.

The isolation of the ISS from the Earth's surface provides a physical barrier against unauthorized access. The absence of a direct physical connection with Earth significantly reduces the risk of physical attacks, such as tampering with hardware or stealing sensitive information. This isolation also eliminates the risk of electromagnetic interference from Earth-based sources, which can disrupt communication systems and compromise data integrity. By leveraging this isolated environment, the ISS establishes a foundation for strong cybersecurity measures.

Developing a secure communication infrastructure

The ISS relies on a robust and secure communication infrastructure to establish connections with ground stations, enabling real-time communication between the astronauts and mission control. The communication channels are designed with strong encryption algorithms to protect sensitive information from interception and tampering. Secure protocols and authentication mechanisms ensure that only authorized personnel can access the systems and data on board the ISS. These measures prevent unauthorized access and help safeguard critical systems and scientific data from cyber threats.

The secure communication infrastructure on the ISS employs encryption algorithms, such as Advanced Encryption Standard (AES), to protect the confidentiality of transmitted data. This ensures that any intercepted information remains unreadable and unusable to unauthorized individuals. Additionally, secure protocols like Secure Shell (SSH) and Transport Layer Security (TLS) are used to establish encrypted connections between the ISS and ground stations, ensuring the integrity of data transmission. By implementing strong encryption and authentication mechanisms, the ISS establishes a secure communication framework that safeguards critical information from cyber attacks.
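As a small illustration of the same principle, Python's standard `ssl` module can build a client context with the properties described above: modern TLS versions only and mandatory certificate verification. This is a generic sketch, not the station's actual communication stack, and the hostname in the comment is a placeholder.

```python
import ssl

# Build a client-side TLS context that refuses legacy protocol versions
# and requires the peer to present a valid certificate.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED

# Wrapping a socket with this context yields an encrypted, authenticated
# channel (hostname is a placeholder, not a real ground station):
#
#   with socket.create_connection(("ground-station.example", 443)) as raw:
#       with context.wrap_socket(raw,
#               server_hostname="ground-station.example") as tls:
#           print(tls.version(), tls.cipher())
```

Any intercepted traffic on such a channel is ciphertext, and a forged endpoint fails the certificate check before data flows.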

Fully-fledged redundancy and resilience

The ISS is equipped with redundant systems to ensure that even if one component fails, others can seamlessly take over, preventing disruptions to critical operations. Redundancy also extends to the communication infrastructure, with backup systems in place to ensure continuous connectivity. These redundancy measures help protect against cyber attacks and ensure the continued operation of vital systems, even in the face of potential threats.

The space environment poses various risks to hardware and software systems, including radiation, microgravity, and extreme temperatures. These factors increase the likelihood of system failures and can make the ISS vulnerable to cyber attacks. The ISS is designed to address this with redundant systems and backup mechanisms. If one component malfunctions or is compromised, alternative systems can take over seamlessly, maintaining the integrity of critical operations. This redundancy enhances the resilience of the ISS's cybersecurity infrastructure, minimizing the impact of cyber threats and ensuring the continuous functionality of essential systems.

Allowing international cooperation

The ISS is a prime example of international collaboration, with multiple nations working together toward common goals. This cooperation extends to cybersecurity efforts as well. Partner nations share their expertise and best practices to enhance the cybersecurity measures implemented on the ISS. Collaborative initiatives, such as information sharing and joint cybersecurity exercises, strengthen the collective ability to detect, prevent, and respond to cyber threats. By fostering international cooperation, the ISS contributes to the development of global cybersecurity standards and practices.

International cooperation in cybersecurity is crucial due to the interconnected nature of space missions and the shared responsibility of ensuring the security of the ISS. Partner nations exchange knowledge and expertise in areas such as threat intelligence, vulnerability assessments, and incident response to enhance the overall cybersecurity posture of the space station. By working together, nations can pool resources, share insights, and collectively address emerging cyber threats.

Furthermore, international cooperation enables the leveraging of diverse perspectives and experiences. Each partner nation brings its unique expertise and approaches to the table, contributing to a more comprehensive understanding of cybersecurity challenges and solutions. This collaborative environment fosters innovation and enables the development of advanced technologies and strategies to mitigate cyber risks on the ISS.

Research and development

The ISS offers a unique environment for research and development in cybersecurity. Scientists and engineers can conduct experiments to better understand the effects of radiation, microgravity, and other space-related factors on hardware and software systems. This research helps in designing and implementing more resilient and secure technologies, not only for space missions but also for terrestrial applications. The knowledge gained from these experiments contributes to the advancement of cybersecurity practices, benefiting industries and governments worldwide.

The extreme conditions of space, such as radiation and microgravity, pose challenges to the durability and functionality of hardware and software systems. Conducting research on the ISS allows scientists to study the effects of these conditions on cybersecurity measures and develop innovative solutions. For example, experiments can be performed to test the resilience of encryption algorithms in the face of radiation-induced errors or to evaluate the performance of intrusion detection systems in a microgravity environment. The findings from these studies can be used to improve the design and implementation of cybersecurity technologies, making them more robust and effective in both space and terrestrial applications.

Moreover, the research and development conducted on the ISS can contribute to the advancement of cybersecurity knowledge in general. The unique experiments and studies conducted in the space environment provide insights and data that can enhance our understanding of cyber threats and vulnerabilities. This knowledge can be shared with the broader cybersecurity community, leading to the development of new techniques, tools, and best practices that can be applied to protect systems and data both in space and on Earth.


The International Space Station plays a vital role in advancing cybersecurity through its isolated environment, secure communication infrastructure, redundancy measures, international cooperation, and research opportunities. By leveraging these advantages, the ISS serves as a platform for innovation and collaboration, strengthening cybersecurity practices both in space and on Earth. As we continue to explore and expand our presence in space, the lessons learned from securing the ISS will undoubtedly shape the future of cybersecurity, ensuring the protection of critical systems and information in an increasingly interconnected world.

How the international space station ensures cybersecurity

Jun 5, 2023 — 4 min read

The importance of healthcare data security solutions within the healthcare industry lies in safeguarding confidential patient information and ensuring compliance with regulations such as those outlined by HIPAA. In the past, protecting patient data was relatively straightforward, as it involved physical records stored in filing cabinets.

However, with the advent of technology and the digital era, patient records are now predominantly stored electronically on computers, servers, and storage devices. This shift brings heightened vulnerabilities to data breaches, malware, viruses, and other malicious attacks.

Contemporary healthcare professionals, including nurses, doctors, and other medical staff, heavily rely on technologies like computers and tablets to access, update, and record patient data. Furthermore, data sharing between multiple healthcare facilities and providers has become commonplace. Consequently, robust healthcare data security solutions become imperative to mitigate the risks associated with malicious data breaches and technical failures.

What is data security?

Data security refers to a range of precautionary measures implemented to safeguard and uphold the integrity of data. In the context of healthcare operations, the aim of data security is to establish a robust plan that maximizes the security of both general and patient data.

Healthcare institutions, such as Veterans Affairs (VA) hospitals, face heightened vulnerability to cyberattacks as hackers seek to obtain personal information for the purpose of committing medical fraud. It is crucial for healthcare organizations to meticulously assess potential causes of data breaches and devise comprehensive security solutions that address internal and external risk factors.

What are some factors that pose risks to healthcare data?

Healthcare organizations should be aware of various risk factors when developing data security solutions for their operations. These factors include, but are not limited to:

Utilization of outdated / legacy systems

Outdated operating systems, applications, and legacy systems create vulnerabilities that make it easier for hackers to access healthcare data. Since these systems are no longer supported by their creators, they no longer receive security patches. Upgrading to newer and more secure systems is advisable.

Email scams with malware

Phishing scams have become increasingly sophisticated, often mimicking emails from familiar sources such as vendors or suppliers. Opening such emails or clicking on embedded links can result in malware installation, granting hackers access to healthcare data. It is crucial to educate employees about the importance of vigilance and avoiding suspicious emails.

Insufficient training in data security practices

When employees, contractors, vendors, and others lack proper training, they may unknowingly violate security protocols. It is vital to provide comprehensive training to all new staff members and regularly review and verify compliance with current data security practices among all employees.

Failure to maintain constant data security

Negligence in securing workstations is a common cause of data insecurity. Employees leaving workstations unlocked allows unauthorized individuals to access and steal data. Emphasizing the importance of locking workstations or enabling auto-locking features after brief periods of inactivity is crucial.

What factors contribute to the increased vulnerability of the healthcare sector to data breaches?

The healthcare industry faces a higher risk of data attacks compared to other sectors due to several key factors. Firstly, the nature of the data collected and stored by healthcare organizations is a significant factor. These organizations possess highly detailed patient records containing personal information such as names, dates of birth, addresses, social security numbers, and payment account details.

The extensive collection of such sensitive data in the healthcare sector inherently heightens the risk of data attacks. Moreover, healthcare data commands a higher price on illicit markets than other types of stolen data. Consequently, it is of utmost importance for institutions like VA hospitals to implement robust data security solutions to mitigate these risks.

What types of security solutions should be employed for safeguarding healthcare data?

The choice of healthcare data security solutions depends on various factors such as data storage methods, the types of data collected, and the retention period. Generally, it is crucial to have comprehensive security measures in place that encompass protocols for patients, employees, contractors, vendors, and suppliers.

To ensure data protection, it is essential to tightly control data access permissions based on a need-to-know basis. For instance, patient insurance information and billing records should only be accessible to individuals responsible for processing insurance claims and managing patient balances.

Similarly, patient records containing diagnoses, treatment plans, and prescriptions should only be accessible to attending physicians, nurses, and other relevant healthcare professionals, with access granted on a case-by-case basis for specific data requirements.

Several common types of data security solutions can be implemented, including:

Data backup and recovery solutions

Regularly back up data to secure servers, such as portable NAS servers, ensuring offsite storage for added security.

Data encryption

Employ encryption techniques when transferring data between workstations, servers, the internet, or cloud-based systems to ensure the highest level of protection.

Anti-virus / Malware / Spyware apps

Utilize appropriate applications to safeguard systems from viruses, malware, and spyware, and regularly update them.

System monitoring apps

Deploy monitoring applications to track file access, updates, creations, movements, and deletions, as well as to detect potential data breaches or unauthorized access and changes to user accounts.
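A minimal file-integrity monitor of this kind can be sketched with Python's standard library: snapshot a directory's SHA-256 hashes, then diff two snapshots to see what was created, deleted, or modified. The file names are invented for the demo, and a production tool would also track permissions, access times, and user accounts.

```python
import hashlib
import tempfile
from pathlib import Path

def snapshot(root: Path) -> dict:
    """Map each file under `root` to the SHA-256 hash of its contents."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def diff(before: dict, after: dict) -> dict:
    """Report files created, deleted, or modified between two snapshots."""
    return {
        "created": sorted(after.keys() - before.keys()),
        "deleted": sorted(before.keys() - after.keys()),
        "modified": sorted(k for k in before.keys() & after.keys()
                           if before[k] != after[k]),
    }

# Quick self-check in a scratch directory (file names invented for the demo)
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "patients.db").write_text("records v1")
    baseline = snapshot(root)
    (root / "patients.db").write_text("records v2")   # unauthorized change
    (root / "exfil.tmp").write_text("...")            # unexpected new file
    report = diff(baseline, snapshot(root))
```

The resulting report flags `patients.db` as modified and `exfil.tmp` as created, which is exactly the signal a monitoring tool would alert on.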

Multi-factor authentication

Implement multi-factor authentication methods to enhance data security, requiring users to provide their username, password, and additional verification items like one-time passcodes sent to their email or mobile phones.

Ransomware protection

Employ specialized applications to protect workstations and servers from ransomware attacks, which can compromise data access and demand a ransom for restoration.

Employee training

Conduct regular training sessions with employees to ensure they are equipped with the necessary knowledge and precautions for safeguarding patient records, data, and confidential information.

It is important to note that the aforementioned list provides sample security solutions that can be employed to protect patient data, employee data, proprietary information, and other vital data within healthcare organizations.


The importance of healthcare data security solutions cannot be overstated within the healthcare sector. The shift from physical records to digital systems has introduced new vulnerabilities, necessitating the implementation of robust data security measures. Safeguarding confidential patient information and ensuring compliance with regulations like HIPAA is of utmost importance.

The healthcare industry faces various challenges to data security, including outdated legacy systems, phishing scams carrying malware, insufficient training in data security practices, and lapses in day-to-day security habits such as unlocked workstations. Addressing these challenges requires the adoption of suitable security solutions.

Effective security measures involve strict control of data access permissions, regular data backup and recovery, data encryption, utilization of anti-virus/malware/spyware applications, deployment of system monitoring tools, implementation of multi-factor authentication, adoption of ransomware protection mechanisms, and comprehensive employee training.

By embracing these measures, healthcare organizations can mitigate the risks associated with data breaches, protect patient data, and uphold the integrity of their operations. Prioritizing data security is crucial for establishing trust, preserving patient privacy, and upholding the highest standards of healthcare.

The significance of healthcare data security solutions

May 30, 2023 — 4 min read

Wearable technology has become increasingly popular in recent years, with devices like the Apple Watch, Fitbit, and Xiaomi gaining significant market share. These devices offer a wide range of features, from tracking fitness goals to monitoring health data and staying connected with the digital world. However, with the rise of wearable tech, concerns have been raised about how secure these devices are and whether they put user data at risk.


The types of data that wearable tech collects vary from device to device. Some devices, such as fitness trackers, collect basic health data such as heart rate and activity level. Other devices, such as smartwatches, can collect more sensitive data such as location, messages, and emails. This data is typically stored on the device and synced to the cloud, making it accessible from any device that the user is logged into.

One of the most significant concerns with wearable tech is the security of this data. If the device or cloud storage is not properly secured, the data could be accessed by hackers who could use it for malicious purposes. For example, hackers could use location data to track users' movements or steal their identity by accessing their personal information.

To protect user data, most wearable tech companies implement security measures. For example, the Apple Watch uses encryption to protect data stored on the device, and it requires a passcode or biometric authentication to access the device. Similarly, Fitbit and Xiaomi use encryption to protect user data and offer security features such as two-factor authentication.

Despite these measures, data breaches in wearable tech have unfortunately become more common in recent years, highlighting the need for continued vigilance when it comes to security measures. For example, in 2017, researchers discovered a vulnerability in the Bluetooth communication protocol used by many wearable devices. This vulnerability, known as BlueBorne, allowed hackers to take control of devices and steal sensitive data.

In 2019, it was discovered that several popular fitness apps had inadvertently exposed sensitive user data. The apps, which were used in conjunction with wearable fitness trackers, had failed to properly secure user data, leaving information like usernames, passwords, and exercise routines vulnerable to attack.

Another example of a wearable tech data breach occurred in 2018 when hackers stole sensitive data from the Polar fitness app. The data, which included location data, was collected by the app's users and stored on Polar's servers. However, the servers were not properly secured, allowing hackers to access the data and track the movements of military personnel and intelligence agents in sensitive locations.

How to protect your data and privacy?

To protect your data and privacy when using wearable tech, there are several steps you can take. Firstly, always use strong, unique passwords for your wearable tech accounts, and consider using a password manager to help generate and manage these passwords.
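At its core, a password generator simply draws characters from a cryptographically secure random source. A minimal Python sketch follows; the length and character-class policy are arbitrary choices for the example, not a recommendation from any particular manager.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Draw characters from a CSPRNG until the password contains at least
    one lowercase letter, uppercase letter, digit, and symbol."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

Using `secrets` rather than `random` matters here: the latter is predictable and unsuitable for anything security-sensitive.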

Secondly, ensure that your device's software is always up-to-date with the latest security patches and updates. This will help to protect against known vulnerabilities and ensure that any new security features are in place.

Thirdly, be aware of the data that your wearable tech is collecting and where it is being stored. Check the privacy policy of your device and app to understand how your data is being used and shared. If you are concerned about your privacy, consider disabling certain features or opting out of data sharing.

Then, consider using a VPN (Virtual Private Network) to protect your online activities when using wearable tech. A VPN encrypts your internet connection, making it more difficult for hackers to intercept your data. VPNs are particularly useful when using public Wi-Fi, which is often unsecured and vulnerable to hacking.

Lastly, it's essential to know how to identify potential threats and scams that may target wearable tech users. This can include phishing emails or fake apps that may trick users into disclosing their personal information or installing malware on their devices.

Wearable tech in the workplace

Another important consideration when it comes to wearable tech security is the role of wearable tech in the workplace. Wearable tech is increasingly being used in workplace environments, where it can offer benefits like tracking employee productivity and health. However, it is essential to ensure that wearable tech devices used in the workplace are properly secured and that sensitive workplace data is not put at risk.

Organizations can take several steps to protect workplace data when using wearable tech.

Firstly, companies should develop a clear policy on the use of wearable tech in the workplace, outlining the acceptable use of these devices and the security measures that should be in place.

Secondly, companies should invest in secure wearable tech devices that offer robust security features, such as encryption and two-factor authentication. This will help to protect sensitive workplace data from unauthorized access and reduce the risk of data breaches.

Thirdly, organizations should provide training for employees on how to use wearable tech devices securely. This could include information on the importance of using strong passwords, keeping devices up-to-date with the latest security patches and updates, and being aware of potential security threats.

Finally, companies should implement monitoring and control measures to ensure that wearable tech devices used in the workplace are being used appropriately and that sensitive data is not being put at risk.


Overall, wearable tech devices offer many benefits for both individuals and organizations, but it is essential to take steps to protect data and privacy. By following the best practices outlined above, individuals and companies can minimize the risks associated with using wearable tech devices and enjoy the many benefits that these innovative devices offer.

How secure is wearable tech?

May 26, 2023 — 4 min read

Digital Rights Management (DRM) is a technology that is used to control the use and distribution of digital content, including music, movies, e-books, and software. The primary purpose of DRM is to ensure that digital content is only used in ways that are authorized by the copyright owner. DRM technology works by placing restrictions on the use of digital content, which are then enforced through encryption, digital signatures, or other methods.

DRM systems typically involve the use of software that is integrated with the content. This software is designed to control how the content is used and to prevent unauthorized access to the content. DRM systems can also be integrated with hardware devices, such as DVD players or e-book readers, to ensure that the content is only used in authorized ways.

One of the most common methods of implementing DRM is through the use of encryption. When digital content is encrypted, it is transformed into a code that cannot be understood without a key. The key is typically stored on a server, and it is used to decrypt the content as and when it is needed. DRM systems can also use digital signatures to authenticate the content and to ensure that it has not been tampered with.
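The tamper-detection half of this can be illustrated with a short Python sketch. It uses an HMAC tag as a simple stand-in for a digital signature; real DRM schemes such as FairPlay or PlayReady use asymmetric signatures and licensed key servers, and the key and content below are invented for the example.

```python
import hashlib
import hmac
import secrets

SIGNING_KEY = secrets.token_bytes(32)   # held by the content distributor

def sign(content: bytes) -> bytes:
    """Attach a MAC tag so any tampering with the content is detectable."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).digest()

def verify(content: bytes, tag: bytes) -> bool:
    """Constant-time check that the content still matches its tag."""
    return hmac.compare_digest(sign(content), tag)

ebook = b"Chapter 1: ..."               # invented stand-in for real content
tag = sign(ebook)
```

If even one byte of the content is altered after signing, verification fails, which is how a player can refuse to open modified files.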

DRM systems are designed to be flexible so that they can be customized to meet the needs of different types of digital content and different types of users. For example, a DRM system for music may allow users to play the music on a limited number of devices, while a DRM system for software may allow users to install the software on a single device.

DRM technology is utilized to protect a wide range of digital content, including entertainment media such as books, music, and videos, as well as sensitive business data, database subscriptions, and software programs. DRM helps content creators and copyright holders control how their work is used and prevent unauthorized changes or misuse.

Here are some examples of DRM in action:

iTunes. Apple's iTunes store uses DRM to limit the number of devices customers can use to listen to songs. The audio files purchased from iTunes contain information about the purchase and usage of the songs, which prevents access from unauthorized devices. Additionally, the content in the iBooks store is protected by FairPlay technology, which ensures that books can only be read on iOS devices.

Digital Music. Spotify acquired the blockchain startup Mediachain to help attribute songs to the correct rights holders, so that the right artists are identified and paid for the songs that are played.

Microsoft Software. Before downloading Microsoft software, such as Windows or Office, users must accept the company's user license and enter a key. Microsoft also uses a DRM technology called PlayReady to secure the distribution of content over a network and prevent unauthorized use of its software.

Sensitive Documents. Many organizations use DRM to protect business-critical documents and sensitive information, such as confidential employee data, business plans, and contracts. DRM allows organizations to track who has viewed files, control access, and manage usage, as well as prevent alteration, duplication, saving, or printing.

Regulatory Compliance. DRM is important for organizations to comply with data protection regulations, such as HIPAA for healthcare organizations and CCPA and GDPR for all organizations.

Despite the benefits of DRM, there are also some criticisms of the technology. Some users argue that DRM restricts their ability to use digital content in ways that they feel are reasonable and legitimate. For example, they may feel that they should be able to transfer a purchased song from one device to another or to make a backup copy of a digital book.

Additionally, DRM systems can be vulnerable to hacking and other forms of attack. If a DRM system is compromised, it can allow unauthorized access to the content, which can undermine the purpose of the DRM system. This has led some users to view DRM as an unnecessary restriction on their ability to use digital content and as a threat to their privacy and security.

Another criticism of DRM is that it can make it difficult for users to access their digital content in the future. For example, if a user switches from one device to another, they may find that their DRM-protected content is not compatible with their new device. Additionally, if the company that provides the DRM system goes out of business or discontinues support for the system, users may be unable to access their content.

Despite these criticisms, DRM remains an important tool for protecting the rights of copyright owners and for ensuring that digital content is used in authorized ways. DRM systems have been used by a wide range of companies, including music labels, movie studios, and software publishers, to control the use and distribution of their digital content.

In recent years, some companies have started to move away from DRM, recognizing that it can be a barrier to the adoption of digital content. For example, some music labels have started to offer DRM-free music downloads, recognizing that users are more likely to purchase music if they are not restricted in their ability to use it. Additionally, some e-book publishers have started to offer DRM-free books, recognizing that users may be more likely to purchase books if they are not restricted in their ability to use them.

However, despite these trends, DRM remains an important tool for many companies, especially for those that want to ensure that their digital content is used in authorized ways. DRM is particularly important for companies that are concerned about piracy, as it can help to prevent unauthorized copying and distribution of their content.

In conclusion, DRM is a technology that is used to control the use and distribution of digital content. DRM systems work by placing restrictions on the use of digital content and enforcing these restrictions through encryption, digital signatures, or other methods.
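The restriction-and-enforcement loop just described can be sketched in a few lines of Python. This is a hypothetical illustration (the key, field names, and license format are all invented, not any vendor's actual scheme): a license server signs a usage license, and the player checks the signature before allowing playback.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: a license server signs a usage license with a
# secret key; the player verifies the signature before playback.
SECRET = b"vendor-signing-key"  # illustrative; held by the license server

def issue_license(user: str, content_id: str, max_plays: int) -> dict:
    payload = json.dumps(
        {"user": user, "content": content_id, "max_plays": max_plays},
        sort_keys=True,
    )
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_license(lic: dict) -> bool:
    expected = hmac.new(SECRET, lic["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, lic["sig"])

lic = issue_license("alice", "movie-42", max_plays=3)
print(verify_license(lic))   # a valid license verifies: True
lic["payload"] = lic["payload"].replace("3", "999")
print(verify_license(lic))   # tampering with the terms invalidates it: False
```

In a real system the signing key would never ship inside the player; verification would rely on public-key signatures and hardware-backed key storage rather than a shared HMAC secret.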

While DRM has its benefits, including the protection of the rights of copyright owners, it also has its criticisms, including restrictions on the use of digital content and the potential for hacking and other forms of attack. Nevertheless, despite these criticisms, DRM remains an important tool for many companies and is likely to continue to be used in the future.

What is digital rights management (DRM) and how does it work?

May 24, 2023 — 4 min read

The importance of protecting the safety and security of our digital devices and the data stored on them has grown significantly as technology becomes more integrated into our everyday lives. The efficacy of antivirus software, and the role it plays in protecting users from online threats, has come under close scrutiny in recent years. This article examines the current state of cybersecurity, the limitations and advantages of antivirus applications, and the alternative solutions available for defending your devices and data in 2023.

Understanding the modern cyber threat landscape

The landscape of cyber threats has expanded at an exponential rate over the past several years, with attacks becoming more advanced, diverse, and targeted. The term "cyber threats" no longer refers just to viruses; it now covers a wide variety of attacks, including the following:

Ransomware

A type of malware that encrypts a victim's files and demands a ransom in exchange for a decryption key.

Phishing attacks

Fraudulent attempts to obtain sensitive information, such as login credentials or financial data, by masquerading as a trustworthy entity.

Zero-day exploits

Attacks that take advantage of previously unknown vulnerabilities in software or hardware, giving developers no time to create and distribute patches.

Advanced persistent threats (APTs)

Long-term, targeted cyberattacks that often involve multiple attack vectors and are typically aimed at high-value targets, such as governments and large corporations.

Because cybercriminals are using more sophisticated strategies, it is essential for antivirus software and other cybersecurity solutions to evolve at the same rate in order to maintain their efficacy.

The limitations of traditional antivirus software

Traditional antivirus software primarily relies on signature-based detection, a method that compares files and programs against a database of known malware signatures. This strategy can be effective against recognized threats, but it suffers from a number of limitations:

Inability to detect new or unknown malware

Signature-based detection struggles to identify new malware variants or previously unknown threats, leaving users vulnerable to emerging cyber risks.

Slow response to new threats

Updating signature databases to include new malware often takes time, resulting in a window of vulnerability.

False positives and negatives

Signature-based detection can produce false positives (identifying benign files as malware) and false negatives (failing to detect actual malware), affecting the overall accuracy and effectiveness of the antivirus software.
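A toy version of signature-based detection makes these limitations concrete. The "database" below holds SHA-256 digests of known samples (invented for illustration); note how a one-byte change to the sample is enough to evade the match:

```python
import hashlib

# Toy signature-based scanner: compare a file's SHA-256 digest against
# a database of known-malware hashes. The sample bytes are illustrative.
SIGNATURE_DB = {
    hashlib.sha256(b"known-malware-body").hexdigest(),
}

def is_flagged(data: bytes) -> bool:
    """Return True if the data exactly matches a known signature."""
    return hashlib.sha256(data).hexdigest() in SIGNATURE_DB

print(is_flagged(b"known-malware-body"))    # known sample: True
print(is_flagged(b"known-malware-body!"))   # one-byte variant slips through: False
```

This is exactly why a new variant, repacked or trivially mutated, produces a false negative until the vendor ships an updated signature.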

The emergence of next-generation antivirus (NGAV) solutions

To address the limitations of traditional antivirus software and better combat the evolving threat landscape, the cybersecurity industry has developed next-generation antivirus solutions. NGAV products employ a combination of advanced techniques, such as:

Behavioral analytics

Monitoring the behavior of applications and processes to detect anomalies indicative of malicious activity, even if the malware itself is unknown or has no known signature.
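As a rough sketch of the idea, behavioral analytics can be as simple as learning a statistical baseline for some activity metric and flagging large deviations. The event rates and threshold below are invented; production NGAV models are far more sophisticated:

```python
import statistics

# Minimal behavioral-analytics sketch: learn a baseline of, say,
# file-write events per minute for a process, then flag rates that
# deviate far from it. Baseline data and threshold are illustrative.
baseline = [4, 5, 6, 5, 4, 5, 6, 4, 5, 5]  # observed normal rates
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(rate: float, threshold: float = 3.0) -> bool:
    """Flag a rate more than `threshold` standard deviations from the mean."""
    return abs(rate - mean) / stdev > threshold

print(is_anomalous(5))    # ordinary activity: False
print(is_anomalous(500))  # ransomware-like burst of writes: True
```

The point of the approach is visible even in this toy: the burst is flagged without any signature for the malware that caused it.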

Machine learning

Utilizing algorithms that learn from previous experiences to identify patterns and characteristics of malware, allowing for more accurate detection and classification.

Artificial intelligence

Incorporating AI to enhance threat detection capabilities and adapt to the ever-changing cyber threat landscape.

These advanced techniques enable NGAV solutions to offer more proactive and effective protection against emerging cyber threats.

Adopting a multi-layered security strategy

Although NGAV solutions represent a significant improvement over traditional antivirus programs, relying solely on a single security solution is insufficient. A multi-layered security approach, combining multiple tools and strategies, is essential for comprehensive protection in 2023. Key elements of a robust cybersecurity strategy include:

Regular software updates

Timely updates to your operating system and applications ensure that known vulnerabilities are patched, reducing opportunities for cybercriminals to exploit them.

A firewall

A strong firewall helps prevent unauthorized access to your network, serving as the first line of defense against potential intruders.

Security awareness training

Regular training and education for users about potential threats and best practices for online safety are crucial in preventing successful attacks, such as phishing and social engineering.

Data backup

Regularly backing up your data ensures that, in the event of a successful attack, you can recover quickly and minimize potential losses.

Endpoint detection and response (EDR)

EDR solutions provide advanced threat detection and response capabilities, monitoring your devices and network for signs of compromise.

Multi-factor authentication (MFA)

Implementing MFA adds an extra layer of security to your online accounts, making it more difficult for attackers to gain unauthorized access.

Network segmentation

Separating your network into smaller segments can help contain potential breaches and limit the spread of malware.

Vulnerability management

Regularly scanning your network and devices for vulnerabilities and addressing them promptly can significantly reduce your risk of cyberattacks.

Do you need antivirus software in 2023?

Given the complexities of the modern threat landscape, maintaining a robust cybersecurity posture is more critical than ever. Traditional antivirus software alone may not offer sufficient protection, but implementing next-generation antivirus solutions and adopting a multi-layered security approach can significantly enhance your defenses.

In conclusion, the question should not be whether you need antivirus software in 2023, but rather which solution best fits your needs and how it can be integrated into a comprehensive security strategy. By staying informed about emerging threats and continually adapting your defenses, you can reduce your risk of falling victim to cyberattacks and protect your valuable data and devices.

As a final note, it is crucial to remember that cybersecurity is not a one-size-fits-all solution. Depending on the nature of your online activities and the sensitivity of the data you handle, your security needs may differ. Regularly evaluating your cybersecurity measures and adapting them as needed will help ensure that you are adequately protected in the ever-evolving digital landscape of 2023.

Navigating the cybersecurity landscape in 2023: do you need antivirus?

May 22, 2023 — 4 min read

Most people search for nearby WiFi hotspots with their laptops, and modern computers can instantly detect whether a wireless connection is present. But what if you don’t want to go through the effort of unpacking your laptop, starting it up, logging on, and then walking around with it, only to discover that there is no wireless coverage where you are? This is where a WiFi sniffer comes in handy: it helps you rapidly locate the nearest WiFi network.

What is a WiFi sniffer?

A WiFi sniffer is a portable instrument that locates the WiFi network closest to the user. It also helps you determine the strength of the WiFi signal, and if there are multiple signals, a WiFi sniffer will prioritize them by strength, saving you both time and frustration.
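That prioritization step can be illustrated in a few lines: given scan results (the SSIDs and RSSI values below are made up), sort by signal strength, where an RSSI closer to 0 dBm means a stronger signal:

```python
# Illustrative sketch of how a sniffer might rank detected networks by
# signal strength (RSSI in dBm; values closer to 0 are stronger).
scan_results = [
    {"ssid": "CoffeeShop", "rssi": -72},
    {"ssid": "HomeOffice", "rssi": -41},
    {"ssid": "Library",    "rssi": -60},
]

# Strongest signal first
ranked = sorted(scan_results, key=lambda n: n["rssi"], reverse=True)
for net in ranked:
    print(f'{net["ssid"]}: {net["rssi"]} dBm')
```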

WiFi sniffers are available in two formats: as a stand-alone device or as a software add-on for your wireless mobile device. Add-ons can often be obtained through your mobile service provider, while standalone WiFi sniffers can be purchased at any computer or electronics store.

WiFi sniffers can handle all kinds of wireless network cards and typically ship with a Prism2 driver that helps determine signal intensity. Many are written in C++ and operate using an "n-tier" architecture: data gathering begins at the lowest tier and progresses up to the topmost layer, where the user interface sits.

How does a WiFi sniffer work?

Operating a WiFi sniffer does not require much technical expertise. With a standalone device, which is compact and portable, you simply press and hold the main button and aim it in the direction where you wish to locate a wireless network. While the device searches for a signal, its lights revolve in a circular pattern; once it finds a wireless signal, the indicator light remains constant, meaning it has detected a wireless network within roughly 300 feet of your device.

If no signal is found, the lights keep rotating, and you may need to try a different direction. In areas where the wireless signal is poor, you may also need to adjust the antenna. WiFi sniffers that are add-ons to your mobile device work in much the same way as the standalone device, although their specific capabilities vary depending on the sort of mobile device you are using.

WiFi sniffer features

Just as there are many different brands of wireless devices on the market, there are many different brands of WiFi sniffers, each with functions that vary by model. When shopping for a high-quality WiFi sniffer, look for a device that can avoid interference from other electronic gadgets and appliances, such as mobile phones, Bluetooth devices, and microwave ovens. The better-grade devices can capture a wireless signal regardless of the environment they are placed in.

The most effective kind of WiFi sniffer typically operates on the 2.4GHz frequency band and can identify both 802.11b and 802.11g signals. It should be easy to carry, convenient, and small enough to fit in your pocket or laptop bag.

WiFi sniffers that have the qualities described above are also useful tools for usage in the house. Whether you are setting up a home office or Internet-ready gadgets, a WiFi sniffer can assist you in determining the areas of your home with the strongest wireless signal so that you can optimize the signal's capabilities and get the most out of your home office or devices.

Isn’t a WiFi sniffer illegal?

No. Locating a wireless network with a WiFi sniffer and wireless sniffing are two quite distinct activities. A WiFi sniffer's sole purpose is to discover the location of the nearest accessible wireless connection; it cannot actually join the network itself. Wireless sniffing, by contrast, means eavesdropping on the traffic inside a wireless network, a technique intended to break into it. The former is permitted; the latter is not, particularly when it is used in the commission of illegal conduct.

On the other hand, wireless sniffing is entirely legal when IT employees use it to monitor network intrusions on business or government infrastructure. Monitoring the information packets sent over a network is referred to as "intrusion detection," and it is used to protect sensitive data from being stolen by hackers and other malicious users.

A few businesses have prohibited customers from using WiFi sniffers that randomly search for available WiFi networks. Instead, these companies are deploying directory-based apps that can be authorized for use by mobile employees and others.

Hopefully, the material presented here will assist you in gaining a deeper comprehension of WiFi sniffers and the functions they perform. In short, they’ll help you save time and stress while maximizing the signal strength of your home wireless network.

How do WiFi sniffers work?

May 18, 2023 — 6 min read

The majority of individuals don’t put much thought into the kind of web browser that they use. Typically, laptops or smartphones are equipped with a default browser like Microsoft Edge or Safari, leading people to assume it's the finest or sole choice available. Nevertheless, there are several other browser options to select from.

Your web browser is the medium through which you communicate with the majority of the internet, resulting in a substantial amount of personal information being managed by it. It is essential to ensure that you are using a secure browser since this data is highly valuable.

So, how much is your data worth? To marketing firms — quite a bit. Companies can sell your browsing data to third parties for profit, and that's just the start of it. Hackers are always on the lookout for people who are not using a secure browser, and exposing personal data in this manner can be incredibly risky.

Your browser and its ability to protect your privacy and security are critical. As a result, let's go through the top five secure browsers for 2023.

Mozilla Firefox

In 2023, Firefox is considered one of the best web browsers as it is secure, open-source, and offers numerous customization options. Its high level of customization makes it an excellent choice for advanced users, yet it is also user-friendly, making it a great option for non-tech-savvy users.

Firefox blocks third-party tracking cookies automatically, resulting in faster browsing speeds than other browsers that allow websites to track user activity, like Chrome. It also features various security measures, such as anti-phishing and malware protection, minimal data collection, automatic tracker blocking, and encrypted browsing with DNS over HTTPS (DoH). It is also compatible with third-party security extensions.

Firefox's anti-phishing protections are impressive, as it is highly effective in detecting risky and known phishing sites when tested against a database of such sites. Additionally, Firefox's DoH protections encrypt search queries with CloudFlare or NextDNS's encrypted DNS servers, making it challenging for third parties to steal browsing history.
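As a sketch of what a DoH lookup involves, the snippet below builds a query for Cloudflare's JSON API (a real endpoint); it only constructs the request URL and does not send it:

```python
from urllib.parse import urlencode

# Sketch of a DNS-over-HTTPS lookup of the kind Firefox performs through
# Cloudflare. This only builds the request URL; actually sending it (over
# HTTPS, with the header "Accept: application/dns-json") and parsing the
# JSON answer is left out.
def doh_query_url(name: str, record_type: str = "A") -> str:
    base = "https://cloudflare-dns.com/dns-query"
    return f"{base}?{urlencode({'name': name, 'type': record_type})}"

print(doh_query_url("example.com"))
```

Because the query travels inside an ordinary HTTPS connection, an on-path observer sees only traffic to the resolver, not which hostname was looked up.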

Although many highly secure browsers compromise convenience for protection, Firefox is simple to use and provides advanced security features. Users can adjust security settings, anti-tracker settings, and anti-phishing protections according to their preferences. Firefox is compatible with Windows, macOS, Android, and iOS.

Tor Browser

In terms of user privacy, Tor Browser is the top choice; however, it is not as fast as most of its competitors.

The name "Tor" is derived from "The Onion Router," a technology that hides the user's IP address by encrypting web traffic and routing it through multiple servers. As a result, before a user's computer can access a website, their traffic must first pass through Tor's secure server network. Tor has been shown to conceal user activity from ISPs, hackers, trackers, and even governments. The NSA was reportedly unable to hack into the Tor network, as stated in Edward Snowden's leaked documents. Tor Browser is banned in certain countries that censor the internet because it provides users with unrestricted access to the web.

Tor's data collection policy is minimally intrusive, as it only collects usage data to assess browser performance. Despite being an advanced browser, Tor Browser's interface is user-friendly, and it uses the same source code as Firefox, with minor variations. Users can even install most Firefox extensions into Tor Browser. However, browser extensions increase the likelihood of machine identification by network surveillance tools, so users who wish to remain as private as possible should avoid using them.

While Tor Browser is highly secure, its onion routing technology will slow down the internet connection, similar to the effect of using a VPN. When users' traffic bounces off multiple servers, their connection speed is adversely affected. Nonetheless, Tor may be the ideal choice for users with a reliable internet connection who are willing to trade some speed for high security. Tor Browser is compatible with Windows, Android, macOS, and Linux.

Brave

Brave is a web browser that offers a fast browsing experience and comes with built-in ad and tracker-blocking features. With its "Shields" feature, Brave can automatically block ads and trackers, which allows it to load websites much faster than other browsers. This feature also provides an added layer of protection by blocking malicious web scripts that may try to infiltrate your device. In addition, Brave automatically sets up HTTPS connections, which use a secure encryption protocol to protect user traffic.

One of Brave's standout features is its ability to use Tor technology in Private Browsing mode, which encrypts your traffic through the Tor network. This ensures that your browsing activity remains hidden not only from other users on your device but also from your ISP and other network spies.

Brave also has a unique ad-buying program called Brave Rewards, which allows users to earn BAT (a type of cryptocurrency) by viewing or clicking on sponsored ads. These BATs can then be transferred to the sites and content creators of your choice. This program offers a great revenue solution for content creators as Brave ads generate revenue without using trackers, selling user data, or pop-ups that interrupt the browsing experience. Brave is available for Windows, Android, iOS, macOS, and Linux.

Google Chrome

Google Chrome is the most popular browser in the world because it is compatible with all major platforms and offers an excellent interface along with thousands of useful extensions. With its vast staff and resources, Google updates Chrome more quickly than any other browser developer, patching network vulnerabilities, man-in-the-middle attack vectors, browser glitches, and exploitable security holes.

Chrome's Safe Browsing feature uses Google's extensive database of unsafe sites, updated daily, to flag suspicious web pages, and it detects more phishing sites than most other browsers. Additionally, Chrome uses sandboxing to prevent malicious web scripts and invasive trackers from stealing data or hacking devices. For added privacy and protection from ISPs, governments, and network-snooping hackers, users can enable DNS over HTTPS (DoH) in Chrome's settings; it is turned on by default in Firefox but requires only a single click in Chrome.

It's important to mention that Chrome's tracker blocking is limited due to Google's reliance on web trackers to gather user data for advertisers. Chrome collects user data by default, and while much of this data is used to enhance Chrome's security, it's also shared within the entire Google ecosystem, including advertisers and potentially even governments. Despite this, Chrome has many trackers and ad-blocking plugins available for security-oriented users, such as Avira Safe Shopping. Although Chrome may be one of the most secure browsers, it's also one of the worst for user privacy. Google Chrome is available for Windows, macOS, Android, iOS, and Linux.

Microsoft Edge

Microsoft Edge is a vast improvement compared to its predecessor, Internet Explorer. Edge is a user-friendly, Chromium-based browser that boasts robust security tools, including Edge SmartScreen anti-phishing technology, which detects phishing sites more effectively than Chrome in tests.

In addition to its security features, Edge also offers a simple tracker-blocking system that has three levels: Basic, Balanced, and Strict. The Strict setting blocks most trackers and cookies, including those necessary for some sites to function. In contrast, the Balanced setting performed best in tests, detecting and blocking the most invasive cookies. This makes it much easier to manage online privacy than in Chrome, where the options are limited to "Allow All," "Block Third-Party," and "Block All."

Like Chrome and Firefox, Edge now supports DNS over HTTPS by default, which enhances user privacy when browsing the web. Microsoft Edge is available for Windows, macOS, Android, and iOS.

Conclusion

It can be difficult to determine whether a browser is truly secure or not, but the browsers mentioned in this article offer a good level of privacy. While this is a great start, for the most secure browsing experience, I suggest using a combination of a secure browser and a virtual private network (VPN). A VPN adds an extra layer of protection to your online activity by encrypting your entire Internet connection, making it much more difficult for anyone to intercept your data or monitor your browsing habits.

By using a quality VPN, you can also hide your real location and appear to be browsing from a different location altogether. This can be especially useful for accessing content that may be restricted in your country or region. With a secure browser and a VPN, you can enjoy a more private and secure browsing experience, free from the prying eyes of hackers, governments, and other third parties that may be trying to track your online activity.

Best secure browsers in 2023

May 16, 2023 — 5 min read

In an era where cybercrime is rampant, businesses must take a proactive approach to safeguarding their confidential information. In 2021 alone, over 118 million people were affected by data breaches, and this number is expected to keep rising.

In this post, we’ll discuss some of the best practices for businesses to protect themselves from cyber threats.

Always have a back-up

A good backup system is one of the best ways to maintain computers’ security and protect your business’s data. Regularly backing up important files can help ensure that you don’t lose any information if a cyber incident or computer issue occurs. Here are some tips on how to effectively back up your data:

  • Use multiple backup methods. Have an effective backup system by using daily incremental backups to portable devices or cloud storage, end-of-week server backups, quarterly server backups, and yearly server backups. Remember to regularly check and test whether you can restore your data from these backups.
  • Use portable devices. Consider using external drives or portable devices such as USB sticks to store your data. Store the devices separately offsite, and make sure they are not connected to the computer when not in use to prevent malicious attacks.
  • Utilize cloud storage solutions. Cloud storage solutions are a great way of backing up all your important information. Choose a solution that provides encryption for transferring and storing your data and multi-factor authentication for access.
  • Practice safe backup habits. Make it a habit to regularly back up your data, not just once but multiple times throughout the week or month, depending on the type of information you’re backing up. Additionally, it’s important to practice safe backup habits, such as keeping your devices away from computers when not in use and regularly testing that your data is properly backed up.
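A minimal sketch of the habit the tips above describe: archiving a folder into a timestamped zip. The paths are illustrative, and a real backup job would also rotate old archives and test restores:

```python
import shutil
import time
from pathlib import Path

# Minimal backup sketch: zip a source folder into a destination
# directory with a timestamped name. Paths are illustrative.
def back_up(source: str, dest_dir: str) -> str:
    """Archive `source` into dest_dir; return the path of the new zip."""
    Path(dest_dir).mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive_base = Path(dest_dir) / f"backup-{stamp}"
    return shutil.make_archive(str(archive_base), "zip", source)

# Example (illustrative paths):
# back_up("/home/office/documents", "/mnt/external-drive/backups")
```

Scheduling a script like this daily (via cron or Task Scheduler) and periodically unzipping an archive to confirm it restores cleanly covers both the "regular backups" and "test your restores" advice.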

Train your employees

To protect your business from cyber threats, educating your employees about the risks and how to stay safe is essential. Training should focus on identifying phishing emails, using strong passwords, and reporting any suspicious activity immediately to the IT department.

Ensure that everyone is up-to-date with the latest threats and strategies for protection by conducting regular cybersecurity training sessions with all of your employees. Provide helpful resources such as tips for creating secure passwords, methods for spotting phishing attempts, and steps for safely sharing confidential information online.

Putting this emphasis on education and training will help create an environment of alertness so that any potential risk can be identified quickly and addressed appropriately.

Password management

Weak passwords are one of the most common entry points for cyber attackers, so using a secure password and password manager is essential to keep your business safe.

A password manager is a tool that allows you to store and manage all your passwords securely, with only one strong master password needed to access them all. Here are some tips for creating strong passwords and using a reliable password manager:

  • Create strong passwords. Choose passwords that include numbers, symbols, upper-case letters, and lower-case letters. Avoid using personal information like birthdays or pet names in your passwords. Additionally, avoid using the same username/password combination for multiple accounts.
  • Use a password manager. A reliable password manager will help you create and store secure passwords. Be sure to select a trustworthy provider, as they will be responsible for protecting your data.

An on-premise password manager like Passwork is an excellent option for businesses that need to store passwords on their own servers. Passwork provides the advantage of having full control over your data and features like password sharing and a secure audit log.

  • Enable multi-factor authentication. Adding an extra layer of security to your accounts is easy with multi-factor authentication (MFA). MFA requires two or more pieces of evidence to authenticate the user's identity, such as passwords and biometric data. Most password managers can enable MFA for all your accounts, so be sure to take advantage of this feature.
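The codes generated by most authenticator apps come from the HOTP algorithm (RFC 4226); TOTP simply derives the counter from the current 30-second time window. A minimal implementation, checked against the RFC's published test vector:

```python
import hashlib
import hmac
import struct

# HOTP (RFC 4226), the basis of the one-time codes used in MFA.
def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian 8-byte counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low 4 bits of the last byte pick an offset
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector: this secret at counter 0 yields 755224
print(hotp(b"12345678901234567890", 0))
```

Even if an attacker steals the password, they still need the shared secret (or the device holding it) to produce a valid current code.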

Finally, make sure you update your passwords regularly and always keep them private. Following these tips will help ensure that you are protecting your business from cyber threats.
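As a sketch of the kind of password a manager generates, the snippet below uses Python's cryptographically secure `secrets` module (the length and character policy are illustrative):

```python
import secrets
import string

# Generate a random password containing lower-case, upper-case, and
# digit characters, using `secrets` (CSPRNG) rather than `random`.
def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until all required character classes are present
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw

print(generate_password())
```

A password manager does essentially this on your behalf, then stores the result encrypted so you never have to remember it.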

Securing your network

Using a Virtual Private Network (VPN) effectively protects your business's sensitive data and prevents unauthorized access to your network. A VPN creates an encrypted connection between your device and the internet, making it more difficult for hackers or malicious actors to intercept and access confidential information. Here are some tips on how to leverage a VPN for optimal security:

  • Research the best VPN providers for features that best suit the needs of your organization
  • Ensure that the provider meets industry standards such as AES 256-bit encryption
  • Set up two-factor authentication with users’ login credentials
  • Configure the VPN for reliable and secure connections
  • Monitor your network for any suspicious activity or unauthorized access attempts
  • Make sure to update the VPN software with new security patches regularly
  • Train users on the proper internet safety and best practices when using a VPN
  • Use an antivirus program and scan all devices connected to the network for malware threats

VPNs are not only important for protecting data and preventing unauthorized access but also for maintaining user privacy. By encrypting the data sent and received over the internet, your organization can ensure that any information stays secure and confidential.

Consistent vulnerability assessments are crucial

Organizations of all sizes must remain vigilant in mitigating cyber threats — and one of the best ways to do this is by conducting regular vulnerability assessments. This will help identify any potential weaknesses or vulnerabilities that could be used by malicious actors to gain access to your system, allowing you to patch and address them before they become a problem.

Here are a few steps to help get you started:

Develop an assessment plan for your organization

Before starting, it’s important to understand the scope and objectives of the vulnerability assessment. Define the overall goals and objectives before identifying any assets or systems that should be included in the assessment.

Identify and document threats

Once you have developed a plan, it’s time to begin searching for potential vulnerabilities within your system. You can use various open-source intelligence techniques, such as scanning public databases and researching known security issues with similar software versions or operating systems that are present in your system.

Create a testing environment

After potential threats have been identified and documented, you should create a safe testing environment to validate the vulnerability assessment results. Doing so will help ensure that any tests conducted do not adversely affect production systems.

Run automated scans

Following the creation of your secure test environment, it’s time to run automated scans on your organization's target systems or assets. This should include both internal and external scanning tools, such as port scanners, web application scanners, or configuration management tools, depending on the scope of the assessment.
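As a toy example of what an internal scan does, the function below attempts a TCP connection to each port and records the ones that accept. It is a sketch of the technique, not a replacement for a real scanner, and should only be pointed at hosts you are authorized to test:

```python
import socket

# Toy TCP "connect scan": a port is reported open if a full connection
# attempt to it succeeds within the timeout.
def scan_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                open_ports.append(port)
    return open_ports

# Example: check a few common service ports on this machine
print(scan_ports("127.0.0.1", [22, 80, 443]))
```

Real assessment tools add service fingerprinting, version detection, and vulnerability matching on top of this basic reachability check.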

Analyze scan results

Once the automated scans have been completed, it’s time to analyze the results and identify any potential issues or vulnerabilities. Assess any weaknesses present in order to prioritize and address them more effectively.

Develop a remediation plan

After identifying potential security issues, you should develop a remediation plan based on the risk level of each issue. This could include patching vulnerable systems, implementing new security measures, or restricting access to certain areas of your system, depending on the severity of the threat.

By conducting regular vulnerability assessments, organizations can stay ahead of cyber threats and ensure their systems remain secure.

Bottom line

Protecting your business from cyber threats should be a top priority for any organization. With the increasing prevalence of cybercrime and data breaches, implementing effective cybersecurity practices is more important than ever.

By regularly backing up important files, training employees on identifying and reporting potential threats, using a secure password manager, utilizing a VPN, and conducting consistent vulnerability assessments, businesses can significantly reduce their risk of falling victim to cyber-attacks.

5 ways to keep your business safe from cyber threats

Apr 13, 2023 — 5 min read

In recent years, the issue of user privacy has become more critical than ever before. With the rise of social media and other online platforms, companies are collecting vast amounts of user data, which can be used for various purposes. While some of these purposes may be benign, such as improving the user experience or providing targeted advertising, others may be more nefarious, such as selling user data to third parties or engaging in targeted surveillance.

There are now many apps that are activated by code words, known as "marker words". These words can covertly activate the listening function on your gadget, completely invisibly. Triggers are not limited to "OK, Google" or "Hey, Siri"; other, completely unrelated words or sounds can set them off.

You may have noticed Instagram advertising something you recently talked about with your friends, even in real time and without touching your phone. If so, your device may well be listening in.

So, who's eavesdropping on us?


Facebook

Facebook reportedly hired hundreds of third-party contractors to transcribe users' voice messages, stopping the practice in July 2019 after it was made public. The contractors were not always told why they were listening to certain conversations and did not understand how the messages were obtained. Facebook never informed its users that their personal voice messages might be heard by unauthorized individuals.


Microsoft

Microsoft employees were reported to have listened to personal audio recordings made through the Cortana and Skype Translator services. Microsoft did not deny this claim; instead, it added the information to the company's privacy policy, stating that users have the right to know their conversations may be reviewed. That said, Microsoft had not previously disclosed this practice, and it is possible the company chose to share the information proactively only after listening to recordings for some time. This contrasts with other companies that faced privacy violations but never disclosed their actions to their users.


Apple

It has been reported that contractors who test Apple's Siri voice assistant for accuracy may be listening in on users' private conversations. Notably, Siri can be triggered by more than the phrase "Hey, Siri": similar-sounding words, background noise, or even hand movements can activate it. As a result, Siri has been inadvertently activated during private conversations, leading to the collection of personal information and recordings of confidential discussions, including medical conversations and commercial transactions. These recordings are often accompanied by data that can reveal the user's location or personal contacts. Apple representatives claim to be working to address these concerns in order to protect users' personal information.


Amazon

Over one thousand Amazon contractors listen to voice recordings made in the homes and offices of Echo voice assistant owners. These contractors sign non-disclosure agreements and are not allowed to discuss the program publicly. They work nine-hour shifts and analyze up to 1,000 sound recordings per shift; even if they have concerns about what they hear, they must adhere to the non-disclosure policy. Amazon claims to take the security and privacy of its customers' personal information seriously, and employees do not have direct access to information that could identify a person or account. Note that users can disable the use of their personal voice recordings for the development of new features in Amazon's Alexa privacy settings.


Google

Google employs experts to listen to the voice commands users give its voice assistant. These recordings are made after the voice assistant hears the phrase "Ok, Google" and can come from smartphones using Google Assistant or from the Google Home smart speaker. Google shares snippets of these recordings with linguists around the world to improve the voice assistant, but claims to have access to no more than 0.2% of all user commands. The company has prohibited employees from transcribing conversations or other extraneous sounds. However, in June 2019, a significant leak of user audio recordings was reported, with over a thousand recordings exposed, including personal conversations between parents and children, addresses, and work calls. Some recordings were made accidentally because the assistant was activated by mistake. Google attributed the leak to the actions of one linguist and claimed to be investigating the matter.


Despite the concerns that these data collection practices raise, companies often argue that they are necessary to improve user experience and provide more personalized services.

However, many users remain skeptical of these claims and are increasingly concerned about the potential for abuse. For example, data breaches can expose user data to hackers and other malicious actors, potentially putting users at risk of identity theft and other forms of cybercrime. Additionally, governments and other organizations may use user data to engage in targeted surveillance, raising concerns about civil liberties and individual privacy.

In response to these concerns, governments and regulatory bodies have taken steps to regulate the collection and use of user data. In the European Union, the General Data Protection Regulation (GDPR) has strengthened data privacy laws and given users greater control over their data. In the United States, the California Consumer Privacy Act (CCPA) has similarly sought to protect user privacy by requiring companies to disclose what data they collect and allowing users to opt out of data sharing.

Despite these efforts, however, the issue of user privacy remains a contentious one. As technology continues to advance, companies will undoubtedly find new ways to collect and utilize user data, raising new concerns about privacy and security. It is therefore crucial that users remain vigilant and informed about the data collection practices of companies they interact with, and for governments and regulatory bodies to continue to monitor and regulate these practices to protect user privacy.

Are companies spying on their users in 2023?

Mar 28, 2023 — 5 min read

People frequently utilize various VPN servers at work. Off-the-shelf options are good, but we've come to learn that a personal VPN offers substantial benefits. To appreciate the benefits of creating your own VPN server over purchasing one, consider why VPNs are used in the first place:

•  To prevent others from intercepting your communications

•  To circumvent access restrictions to a specific resource, whether in your own country or a foreign one

•  To conceal personal information from your Internet provider (or the owner of the Wi-Fi access point)

•  To keep your present location unidentified (don't forget time zones — this is the indicator that may readily pinpoint your location)

Everything is quite straightforward here, so let's get down to the interesting stuff: what are the advantages of using your own service, and how should you go about setting one up?

Well, today you’re in for a treat — to answer these questions, we’ve put together a checklist with step-by-step instructions for setting up and configuring a VPN server.

Advantages of Using a Personal VPN Server

1. Bypassing blocks

Several countries attempt to fight VPNs by blocking them. But if you use your own VPN, it will not appear in the lists of well-known providers and will almost surely avoid blocks.

2. There are no captchas

All well-known services will ask you to pick out horses from a set of photographs, locate traffic lights, or type a word from a picture. Why is this the case? Many other people are using the same ready-made VPN server at the same time as you. Consequently, the website treats such traffic as suspicious and bombards you with captchas. When you use your own VPN server, however, this problem disappears, since you will have a unique IP address that looks like an ordinary user's.

3. High speed

Off-the-shelf VPN services often have low bandwidth, since providers typically can't scale their servers and networks fast enough for a large number of customers. With a self-hosted server, you'll have all the bandwidth you could possibly need.

4. The ability to send all computer traffic through a VPN, not just browser traffic

5.  No need to install third-party software

As you can see, having your own server solves the majority of the problems associated with using a VPN.

Checklist for creating your own VPN server

Let's take DigitalOcean and its Droplet servers as an example.


Register with DigitalOcean

If you already have a DigitalOcean account, you may skip to the next step. If not, you must first register (all the steps are intuitive, don't worry).

Create a new Droplet that will function as a VPN server

Choose a data center from which you intend to connect to the internet. I chose Frankfurt since it is physically closer to my country of residence, which improves connection speed.

Choose Marketplace, and Docker on Ubuntu in the Image column. Finally, in the Size column, choose the subscription plan that suits you.

Next, put a name in Hostname, such as ‘vpn-server’. This has no effect and is simply for your convenience. Next, click the Create Droplet button.

Wait for the server to be created. This might take up to a minute. Following that, you will be given your server's IP address.

Connect to the server via SSH

Launch Terminal on macOS/Linux (or PowerShell/PuTTY on Windows) and connect to the server via SSH using the root username and your server's IP address.

This can be done with the help of:

> ssh root@{your-ip-address}

> enter your password

After that, you should be connected.

Create a docker-compose.yml file

Just copy the code from this website and paste it into your file. This is your server configuration file.

You may create a file directly over SSH using console text editors (nano/vim) or with an SFTP client. I used SSH to access the console editor.

In the same SSH window, input the following:

> nano docker-compose.yml

Paste the content. In the added text, change the following parameters for yourself:

•  my-shared-secret — your secret word

•  my-username — your personal login

•  my-password — your password

Take note of how straightforward it is: the file we need is just 14 lines long.

Exit by pressing Ctrl+X, then Y, and then press Enter.
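The original article links to a ready-made configuration. As an illustration only, a compose file for the widely used hwdsl2/ipsec-vpn-server image looks roughly like this (the image name and variable names are assumptions on my part, not taken from the article — use the file from the article's link):

```yaml
services:
  vpn:
    image: hwdsl2/ipsec-vpn-server   # assumed image; use the one from the article's link
    restart: always
    environment:
      VPN_IPSEC_PSK: my-shared-secret   # your secret word
      VPN_USER: my-username             # your personal login
      VPN_PASSWORD: my-password         # your password
    ports:
      - "500:500/udp"
      - "4500:4500/udp"
    privileged: true
```

The three environment values correspond to the three parameters you are asked to change above.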

Run the container with the newly created configuration

Use the same SSH window in which we just created the file.

> docker compose up -d

Congratulations! Your VPN server is up and running. So, how do you connect it?

Connect to the created VPN server

We recommend using IPsec because clients for this type of VPN are already built into macOS and Windows, so you don't have to install anything locally. You just need to create a new VPN connection with the following parameters:

•  Type: IPsec

•  Server address: enter the IP address of the server

•  Account name: write my-username (or the one you changed it to)

•  Password: add my-password (or the one you changed it to)

•  Shared Secret: write my-shared-secret (or the variant you changed earlier)

For macOS, you don't need to install anything; just configure it like this:

For Windows, these settings will look a little different:

Unfortunately, Windows is not so simple: you will have to dig into the registry and allow NAT-T.

For Linux users, there is also a screenshot with the required settings (I used them in Ubuntu 22.04):

Before setting up, you need to install the network-manager-l2tp-gnome package. This is done through the console:

> sudo apt-get install network-manager-l2tp-gnome

You can also connect from your phone without installing anything else. The settings on an iPhone look like this:

And that's it — you're done! Connect and check your IP address, for example, on Whoer via the link. Now, as far as the whole Internet is concerned, you are physically located in the region where you created your VPN server, and your IP is the server's IP. It's not as scary, time-consuming, or expensive as you might think.

Security recommendations

When it comes to the security of your server, I would, as a final thought, recommend:

•  Using an SSH key instead of a password

•  Changing the SSH port from 22 to any other port

•  Using a complex password and Shared-secret (preferably a randomly generated string)
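For the last point, a cryptographically random string can be generated in a couple of lines. A minimal sketch using Python's secrets module (the lengths chosen here are just reasonable defaults):

```python
import secrets

# Generate a random shared secret and password for the VPN configuration
shared_secret = secrets.token_urlsafe(32)  # 32 random bytes, ~43 URL-safe characters
password = secrets.token_urlsafe(16)

print("shared secret:", shared_secret)
print("password:", password)
```

Paste the generated values into docker-compose.yml in place of my-shared-secret and my-password.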

How to create your very own VPN server

Mar 20, 2023 — 4 min read

Natural language processing (NLP) has made considerable strides in recent years, which has led to the creation of effective language models such as ChatGPT. ChatGPT was developed by OpenAI and makes use of cutting-edge machine learning algorithms to produce text answers that appear to have been written by humans. Concerns about its safety and how it may be abused are beginning to surface as its usage becomes more widespread. We’re aiming to provide a complete overview of ChatGPT's security by delving into its safety features as well as the possible threats that are involved with using it.

How ChatGPT works

It is vital to understand how ChatGPT operates in order to fully appreciate the security features it offers. At its core, ChatGPT is built on a deep learning architecture known as the Transformer. This design gives the model the ability to discover patterns and correlations in massive volumes of text data. Because the model has been trained on such a large dataset, which includes web pages, books, and articles, it can provide replies to user inputs that are relevant to the context of those inputs.

Security measures in ChatGPT

OpenAI has put in place a number of preventative safeguards to guarantee the confidentiality and morality of the users of ChatGPT. These precautions include the following:

  1. Content Filtering: OpenAI has a content screening mechanism in place to prevent the creation of unsuitable or potentially dangerous content. This mechanism weeds out harmful material by employing both automated algorithms and human moderators in combination.
  2. User Authentication: Applications that use ChatGPT require user authentication, which restricts access to the system to only those who have been granted permission to do so. This precaution helps stop unauthorized access and lowers the possibility of harmful usage.
  3. Privacy Measures: OpenAI is strongly committed to protecting user privacy and safeguards all data during storage and processing. To secure the personal information of its users, the company abides by strict data privacy requirements, such as the General Data Protection Regulation (GDPR).
  4. Continuous Improvement: OpenAI is constantly looking for feedback from users in order to enhance the safety and security functions of ChatGPT. The organization is better able to recognize possible dangers and take preventative measures to mitigate them if it keeps its lines of communication with the user community open and active.

Potential risks and misuse

Despite the security measures in place, ChatGPT is not without risks. Some of the potential dangers associated with its use include:

  1. Generating Misinformation: ChatGPT has the ability to create information that is either purposefully or accidentally misleading or erroneous. This danger is caused by the fact that the model is dependent on training data, which may contain information that is inaccurate or biased.
  2. Amplifying Harmful Content: Even with methods in place to screen out hazardous information, there is still the risk that some of it may slip through. As a consequence, hate speech, extremist ideology, and other harmful content may be amplified.
  3. Privacy Breaches: The risk of data breaches continues to exist despite the implementation of stringent privacy protections. There is always the risk that cybercriminals would try to acquire unauthorized access to user data, which might result in privacy breaches.
  4. Social Engineering Attacks: ChatGPT's ability to generate human-like responses can be exploited by bad actors to conduct social engineering attacks. These attacks can involve impersonating trusted entities or individuals to manipulate users into revealing sensitive information or performing actions that compromise their security.

Mitigating risks

To minimize the risks associated with ChatGPT, both developers and users can take proactive steps. Some recommendations include:

  1. Regularly updating security measures: OpenAI should regularly update and enhance its security procedures, taking user feedback into account and tackling new risks as they emerge.
  2. User education: It is essential to educate users about potential hazards and to encourage responsible usage. This involves raising awareness about disinformation, privacy issues, and social engineering attacks.
  3. Strengthening of content filtering: To detect and remove harmful content successfully, OpenAI must continue to improve the algorithms that power its content filtering system, combining machine learning with human moderation.
  4. Collaboration with researchers and policymakers: OpenAI should actively collaborate with researchers, industry experts, and policymakers to develop best practices, guidelines, and regulations that ensure the responsible and secure use of ChatGPT. This collaboration can contribute to a broader understanding of the potential risks and help create a safer AI ecosystem.


ChatGPT is a powerful language model with tremendous promise for a wide range of applications. Although OpenAI has taken significant precautions to ensure its safety, threats remain possible. The dangers associated with using ChatGPT can be significantly reduced so long as appropriate precautions are taken, such as providing users with adequate training, enhancing content filtering algorithms, and encouraging collaboration between researchers and policymakers.

While using ChatGPT or any other AI-based technology, it is essential for users to stay aware and exercise care at all times. Understanding the potential dangers and taking preventative measures to reduce or eliminate them goes a long way toward ensuring these powerful tools are used safely and responsibly. By doing so, we can harness the promise of ChatGPT while efficiently addressing security concerns.

How secure is ChatGPT? Unveiling the safety measures and potential risks

Mar 3, 2023 — 7 min read

The digital era has provided us with numerous advantages. Handheld devices that we carry in our pockets allow us to connect instantaneously with people all over the world, shop for necessities, manage our accounts, conduct our jobs, and so much more.

However, because the internet has become so ingrained in our daily lives, it has also become a massive source of risk. Criminals seeking to steal money or information and endanger national security and stability have more tools than ever to use against us.

As a result, governments must examine cyberspace risks and take action to keep their citizens secure. However, as is often the case, certain governments and general society do better than others.

It is critical to learn which countries are doing well and which are not, as this can help you understand the dangers you encounter when traveling and which policies are effective and not.

Today, we've compiled a list of the five most cyber-secure countries and the five least cyber-secure countries.

The top 5 cyber-secure countries

After reviewing several studies on the cybersecurity of nations throughout the world, we found the following five to be the best:

United States

While cybercrime is a problem in the United States, the country also has the greatest infrastructure to combat it, and more cybersecurity companies call it home than any other nation. When it comes to fighting cybercrime, the United States is cooperative and reasonably well organized in its efforts.

The Global Cybersecurity Index granted it a flawless score, although there is still room for improvement: in particular, better efforts to inform the population of potential cybersecurity threats. Only 2.89 percent of mobile devices are infected with malware, and even fewer are afflicted with banking or ransomware trojans. Attack rates are low across the board, propelling the United States higher than in prior years' rankings.


Finland

Finland has earned a spot on our list due to its outstanding legislative response to cybercrime. It also has the lowest mobile malware infection rate, at 1.06 percent. Malicious mailings are virtually nonexistent, and targeted attacks of all kinds are rare.

In general, Finland is doing an excellent job, and the government has recently allocated funding and resources to assist businesses in strengthening their cyber defenses in response to a more hazardous environment. This is an effort that we would want to see more governments officially support.

However, because every country has the chance to improve, we would want to see the government become more organized in its battle against cybercrime, both globally and locally. Powerful legislative measures and technological capabilities can only be fully exploited if the action plan prioritizes cybercrime reduction.

United Kingdom

Another high scorer and a country that has continuously been one of the finest in the world when it comes to cybersecurity, the United Kingdom comes in third place in our rankings.

Mobile malware infects a small percentage of devices (2.26 percent), banking and ransomware trojans are minimal if not nonexistent, and the United Kingdom is the source of very few cyberattacks globally. By all accounts, it has a calming effect on the global cybersecurity community.

In some ways, the United Kingdom resembles the United States in its strengths and weaknesses: while the legal framework and enforcement efforts are generally excellent, we would like to see more government effort to educate citizens. The best defenses in the world will be in vain if the average person lets malware in through the front door.

South Korea

The Republic of Korea, a country noted for its exceptional technical achievements in the area of computers, is one of the top countries and the leader in the Asia-Pacific region.

Why? It has a robust regulatory structure in place to combat cybercrime and the technological capacity to enforce it, and it is typically cooperative in international efforts. It could benefit from additional organizational effort to fully leverage its capabilities, but this does not diminish the country's positive effect on global cybersecurity.

However, compared to the top scorers, there is room for improvement in the total number of infected devices. Banking malware and trojans are an issue, and malware infects around 3.19 percent of mobile devices. South Korean devices are regarded as targets, and this must be addressed regardless of how ineffectual the majority of attempts are.


Denmark

Denmark rounds out our top five, which should come as no surprise. It is technologically advanced, has a solid regulatory framework in place to combat cybercrime, and is well-organized in dealing with threats and ensuring that individuals and businesses are prepared.

The infection rate of devices across the country reflects these efforts. Only 1.33 percent of mobile devices are infected, and Denmark ranks at the top in almost every infection metric.

Studies continuously show zero infected devices, be it with mobile ransomware or mobile banking trojans.

While its broadly diplomatic attitude may prevent it from taking substantial steps, Denmark would benefit from a more coordinated worldwide approach to combating cybercrime. Cybercrime is a global problem, because cyber thieves do not recognize or respect borders.

Honorable mentions


China

China may not be at the top of the list, but the Chinese government is actively working to strengthen cybersecurity.

According to official statements, a large-scale strategy for reorganizing the country's security industry has been planned. Within the framework of this program, the following will be developed:

•  5 safety laboratories

•  3-5 national industrial security parks

•  10 demonstration sites for innovative products

•  A number of enterprises with international competitiveness in the industry

The Chinese government projects that by 2025, annual cybersecurity investment will reach 22 billion dollars.

The top 5 least cyber-secure countries


Algeria

Algeria remains a troubled country in terms of cybersecurity. There is minimal organizational and government support for cybersecurity measures, and the country is fairly isolated when it comes to joint efforts (or its overall efforts are simply ineffective).

When you combine these issues with high infection rates, it's easy to see why Algeria ranks first on this list. Malware-infected phones account for 21.97 percent of all phones. There is a banking virus issue as well as a crypto-mining issue, and web-based malware has infected 6.22 percent of devices.

It will take time and effort to address Algeria's cybersecurity issues, and we are not seeing any progress in this regard.


Iran

Iran has not performed well in terms of cybersecurity in previous years, and recent times have been particularly harmful to the country. Infection rates are exceptionally high, with the highest incidence of mobile malware infection worldwide (30.29 percent). 1.6 percent of consumers were targeted by banking malware, while 29.06 percent were infected by local malware. Other sorts of attacks are less common, but they continue to be a problem.

These difficulties might be addressed with patience and care, but the country's leadership is not as cooperative in international efforts as it could be, and the framework and infrastructure are not comparable to those found in the industrialized world. All of these variables combine to make it a hazardous environment for your device.


Tanzania

While Tanzania has made tremendous progress in addressing its cybersecurity vulnerabilities, there are still certain organizational flaws that cause problems and must be addressed.

This alone would not have qualified it for this list, but according to the most recent available statistics, it had one of the highest infection rates for devices worldwide. Although very recent data is unavailable, Tanzania formerly had a mobile infection rate of 28.03 percent and a PC infection rate of 14.7 percent.


Tajikistan

Tajikistan, for all intents and purposes, does not have a cybersecurity apparatus of any sort. As things are, there is limited technological assistance, minimal legislative measures enforcing cybersecurity, and absolutely no cooperation measures, capacity, or progress. People are on their own when it comes to cybersecurity, and the country would be higher on this list if it weren't for the fact that other countries have more infected devices.

Despite this, there aren't many infected devices, perhaps because hackers don't see the country as a key target. Even so, 41.16 percent of computers are vulnerable to malware attacks, and further concerns loom as more devices enter the country. If you are in Tajikistan, be cautious with your equipment and take precautions to protect yourself.


Pakistan

Pakistan has a cybersecurity concern, with 21.18 percent of PCs vulnerable to local malware attacks and 9.96 percent of mobile devices already infected. While infection rates are lower than they were a few years ago, there is still a lot of work to be done, and anyone visiting should take additional precautionary measures.

Pakistan is also a country that is typically uncooperative on an international level when it comes to dealing with cybercrime, which does not help given that it is not a technology powerhouse like some other nations with a more isolationist approach. Things are unlikely to improve in the near future.

Dishonorable mentions


Vietnam

Vietnam has made significant progress in terms of its cybercrime framework, but it still has one of the highest rates of infected devices in the world.

Malware infects many computers and 9.04 percent of mobile devices. To lower the risk of infection, the government must identify remedies and act on them.


We hope you now have a better understanding of the global cybersecurity environment and what makes one country more cyber-safe than another. Of course, it is preferable to avoid going to countries with poor defenses, but if you find yourself in one of these areas, commit to good digital practices and you should be secure no matter where you are.

Understand the risk: the best and worst countries for cybersecurity

Feb 28, 2023 — 2 min read

It’s no secret — largely thanks to Hollywood — that launching a nuclear warhead requires a series of complicated steps, one of which is entering a launch code, typically a long string of letters, numbers, and other symbols. In reality, however, it’s a lot simpler than that.

A chunk of trivia

In 1962, the then-President of the United States, John F. Kennedy, announced that, for purposes of national security, the detonation of a nuclear weapon should only be carried out after the entry of a secret code. To put this into action, a technology known as PAL (Permissive Action Link) was developed. The president's goal was that such a system would prevent accidental missile launches and reduce the number of personnel capable of carrying them out.

Despite this, a detail that is both intriguing and humorous is that during the Cuban Missile Crisis, the code for firing nuclear missiles was literally eight zeros. This code did not undergo any revisions for the subsequent 17 years. Indeed, the code wasn't even hidden; the launch instructions for each missile were printed right on them. It wasn't until 1977 that true security codes were mandated for US missile crews. Until then, any group of individuals with even a little access to nuclear weapons could launch a limitless number of nuclear missiles using a code that even an ape could figure out.

The generation of nuclear codes

The process of generating nuclear codes is complex and secure. The codes are created using a random number generator, which is a computer program that generates numbers randomly without any pattern. This code is then encrypted using highly secure cryptographic algorithms that are almost impossible to break. The encryption keys are divided into multiple parts and distributed among people known as custodians. These custodians are typically high-ranking military officials who are responsible for safeguarding the keys and ensuring that they remain secure.
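The split-key idea above can be illustrated with a toy sketch. This is not how real PAL systems work; it simply shows XOR-based secret splitting, where every custodian's share is required to reconstruct the code:

```python
import secrets

def split_key(key: bytes, n_custodians: int) -> list[bytes]:
    """Split a key into n XOR shares; all shares together rebuild it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_custodians - 1)]
    last = key
    for s in shares:
        # XOR each random share into the final share
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the original key."""
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

launch_code = secrets.token_bytes(16)   # random code from a CSPRNG
parts = split_key(launch_code, 3)       # distributed to three custodians
assert combine(parts) == launch_code    # only all parts together recover it
```

Any subset of fewer than all shares reveals nothing about the key, which mirrors the idea of distributing encryption key material among multiple custodians.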

Typically, the keys are created on a physical device that is purpose-built to have a very high level of security. This piece of hardware is referred to as a key-generation device, or KGD for short. The KGD is resistant to tampering and was developed to be extremely difficult to break into. This guarantees that the codes are created in a protected setting, preventing any illegal access to the information.

How nuclear codes are cooked

Feb 27, 2023 — 4 min read

We live in a digital age, and teaching children about internet safety must be a first port of call. Children are constantly on their phones and tablets, and many complete their coursework online. All of these services require a password to secure personal information, but passwords are frequently pre-set for youngsters, who do not get to create their own.

Children will never learn how to create secure passwords if those passwords are never changed, which leaves them vulnerable to hacking. It is our responsibility as parents to educate our children about internet safety. This includes not only stopping kids from accessing inappropriate information, but also explaining why. The best way for children to learn about computer security is to watch adults who are skilled in the field. Continue reading to learn how to teach your children about password security quickly and effortlessly.

Make unique and fun passwords

Passwords should be easy for your children to remember but tough for others to guess. That may sound like a contradiction, but if you make it fun, your child will be more likely to remember their passwords. Here are some easy ideas to get their creative juices flowing:

•  Make up your own sentences or words. If they had a favorite stuffed animal as a youngster, try to integrate it, but don't make it the sole word. Use three or more to create complexity.

•  Never use basic, popular passwords such as ABCDE, 123456, or "password". Hackers can easily breach them and obtain access to your accounts.

•  Use passwords that are at least eight characters long

•  Use numbers, uppercase letters, and symbols as needed, but avoid using them in obvious ways. Don't simply swap symbols in for letters, such as an exclamation point (!) for i or an at symbol (@) for a. These are standard replacements that cracking tools try first.

•  Create unique passwords for each website. If your password is hacked and you use it in several places, hackers will have access to your children's sensitive information in multiple areas.
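
The rules above can be collapsed into a short checker. The thresholds and the list of common passwords here are illustrative, not an official standard:

```python
import string

COMMON_PASSWORDS = {"password", "abcde", "123456", "qwerty"}

def is_strong(password: str) -> bool:
    """Apply the checklist: length, no common words, mixed character classes."""
    if len(password) < 8 or password.lower() in COMMON_PASSWORDS:
        return False
    has_upper = any(c.isupper() for c in password)
    has_digit = any(c.isdigit() for c in password)
    has_symbol = any(c in string.punctuation for c in password)
    return has_upper and has_digit and has_symbol

print(is_strong("password"))        # False: a common password
print(is_strong("Tig3r&Moon!Sky"))  # True: long, with mixed character classes
```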

Passwords should not be shared

This one may be difficult for your children to grasp. They do, after all, know your phone's password! However, it is critical that your children do not share their passwords with anyone other than their parents—including their siblings. The more people who know their password, the more likely it is that people who should not have access to their accounts will.

Explain some of the scenarios that could occur to your children to ensure that they understand why they should not share their passwords. Listed below are a few examples:

•  Someone could steal their identity

•  Someone could send hurtful messages and jeopardize friendships

•  Someone could open accounts on questionable platforms using their identity

•  Someone could change their passwords and keep them from accessing their accounts

•  If there are bank accounts attached, someone could spend their money

These are just a few examples, but they should be enough to convince your children not to share their passwords. If they do, they must inform you of who they shared it with and why. You can then decide whether or not to change their passwords.

Remember, as a parent, this does not apply to you. As a precaution, you should have the passwords of all of your children who are under the age of 18. This will give you peace of mind because you will know you can monitor their online activity for their safety and security. There are many frightening people out there, and not just those looking to steal passwords.

Avoid using the same password in multiple places

It may be difficult to keep track of so many different passwords, but it is critical that you and your child develop a unique password for each website, platform, or program. This will help safeguard their data:

•  If there is a data breach in one place, they only need to be concerned about that one account

•  If the same password is reused, a single breach may expose far more information, which might be harmful

Your child may not be able to use a password manager at school, but there are security services that can assist you in storing passwords across various platforms. They can also generate secure passwords that are difficult to decipher. These are useful tools, but you should not rely only on them for all of your passwords in case you are locked out.

What does a strong password look like?

You may be asking what makes a password strong now that you know what to do and what to avoid while teaching your children password safety. There are several approaches to constructing a secure password, and you must ensure that passwords are simple for your youngster to remember.

One method is to speak to their interests or their sense of humor.

•  Use their passions as a source of inspiration. If they enjoy magic, you might create something like AbramagiCkadabrA#7. This is an excellent password since it includes random capitalization, a number, and a distinctive character.

•  Use something amusing for them. For example, because little children are typically delighted by potty humor, you might make their password @uniFARTcorn3. Again, you've covered all of the typical password requirements, and your kids will have a good time typing it.

•  Make use of foods and pastimes. You might, for example, create their password Apple3picking! EAO. It combines a hobby they enjoy, their favorite number, a special character, and letters or abbreviations only they understand.

You want to make the password difficult to guess but easy to remember, so choosing items that will jog your child's memory or make them smile as they type will increase the likelihood that they remember it.

It is not suggested to keep a digital file of passwords on your computer, but if necessary, you may write them down for your children until they learn them. Just be careful not to lose track of where you wrote them!

How to teach children about password security: tips for parents

Feb 22, 2023 — 4 min read

When most individuals hear the phrase "data disposal," they get nervous. The deletion of data on one's computer or mobile device is the last thing most people want. But, whether you are the owner of a large, medium, or small corporation, or simply a regular user, you will need to delete or replace your obsolete media at some point. After all, you must guarantee that any data contained in this medium is erased and cannot be recovered.

Nobody wants the next owner of their outdated equipment to discover their secrets, which might have serious legal or competitive consequences.

However, few people understand how to properly erase data such that it cannot be retrieved by others.

What are the different forms of data disposal?

Fortunately, there are various methods for disposing of data. Unfortunately, none of these strategies are ideal, nor can they guarantee total success. However, understanding the strategies available can assist you in selecting the one that is best for you or your business.

Delete / Reformatting

As previously stated, removing a file from an electronic device removes it from a file folder but does not delete the contents. The data is saved on the device's hard drive or memory chip.

The same holds true when you attempt to erase data by reformatting the disc. This also does not erase the data. It simply substitutes a new file system for the existing one. It's like ripping out the table of contents from an old cookbook when you really want to get rid of the cookbook itself. There are several programs available on the internet that allow nearly anybody to recover data from a drive that has just been reformatted.

Using approaches like these is a poor, uninspired, and ineffective manner of attempting data disposal.

Data wiping

Data wiping is the process of erasing data from an electronic medium so that it can no longer be read. Typically, data wiping is achieved by physically attaching any media to bulk wiping equipment. It may also be done internally by booting a PC from a network or a CD. It is a procedure that allows you to reuse any medium that has been erased in this manner without losing storage capacity.

Wiping data can take a long time, sometimes even an entire day for just one device. Data wiping may be valuable for an individual, but it is impracticable for a company owner who has to clean several devices.

Overwriting data

In a way, overwriting data is similar to wiping data. When data is rewritten on an electronic device, a series of ones and zeros is written over the existing data. Set patterns may also be employed; the pattern does not have to be random. Most of the time, one overwriting pass is sufficient to complete the operation. But numerous passes may be necessary if the medium requires a high level of security. This ensures that no bit shadows remain visible and that all data is entirely deleted.
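
A single overwrite routine is easy to sketch in code. The helper below is hypothetical, not a certified wiping tool, and note that on SSDs and journaling filesystems an in-place overwrite is not guaranteed to reach every physical copy of the data:

```python
import os
import secrets

def overwrite_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random data, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # force each pass to disk, not just the page cache
    os.remove(path)
```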

A bit shadow is a trace of erased information that may still be seen under an electron microscope. It resembles writing a note on a notepad: you can take off the top sheet of paper, but what you wrote may still be legible on the page immediately below. High-security organizations are still concerned about bit shadowing, but low-risk companies certainly don't need to worry too much, since recovering data with an electron microscope is both time-consuming and expensive.

Overwriting is perhaps the most common method of data destruction. It can be time-consuming and is only effective if the media being rewritten is undamaged and still capable of receiving data writes. It also provides no protection while the overwrite is in progress. Hard disks with complex storage-management components may not support overwriting at all. And if you are overwriting a device because of legal obligations, you might need a license for each piece of media being overwritten.

Erasure

Erasure is another term for overwriting. Erasure should be comprehensive, destroying all data on a hard drive, and delivering a certificate of destruction demonstrating that data on an electronic device has been effectively wiped. Erasure is a terrific concept for enterprises that have acquired off-lease equipment, such as PCs, enterprise data centers, and laptops, or if you want to reuse or redeploy hard drives for storage of new contents.

Degaussing

Degaussing destroys computer data by disrupting the magnetic field of an electronic media with a high-powered magnet. The data is destroyed when the magnetic field is disrupted. Degaussing may swiftly and effectively erase data in a device containing a huge quantity of information or sensitive data.

However, it has two big drawbacks.

When you degauss an electronic device, its hard drive becomes unusable: degaussing damages the hard drive's connecting circuitry. If you wish to reuse a digital device such as a laptop, computer, or mobile phone, this is not the way to go about it.

Another issue is that there is no means of knowing whether all of the data has been destroyed. Once the hard disk is unusable, you cannot check it. In this instance, the only way to verify data destruction is to use an electron microscope. However, unless you are destroying high-security information, this method of verification is both costly and unworkable.

The density of a hard disk can also affect degaussing. As technology advances and hard drives get larger and more powerful, degaussing may no longer be as effective as it once was.

Physical destruction

Many people want to recycle their old equipment but are hesitant because of the information it may hold. These folks frequently take out the hard disk and crush it to pieces with a hammer.

Surprisingly, physical destruction is also a cost-effective method for organizations and corporations of all kinds to remove data. One of the most advantageous aspects of physical destruction is that it provides an organization with the highest possibility that data has been physically deleted.

However, it may be costly, and because it entails the destruction of electronic media, the capital cost is also considerable. It might also be a concern if an organization has a green and sustainable recycling program for obsolete electronic media.

Degaussing can be considered a form of physical destruction. Incineration is another option, although it is less prevalent since the destruction must take place away from inhabited areas.


Properly disposing of sensitive information is an essential component of information security. By taking the time to identify what data needs to be disposed of, selecting the right methods for disposal, and having a secure and controlled plan in place, organizations can ensure that sensitive information is protected and kept out of the wrong hands.

How to properly dispose of sensitive information

Feb 20, 2023 — 6 min read

This question is indeed controversial, sparking a heated debate in all camps. Regardless of who is right, according to an IBM report from last year, the average data breach now costs more than $4.35 million.

That is why, now, more than ever, programmers must be aware of the risks associated with various programming languages and take precautions to protect their code from vulnerabilities. The good news is that known best practices can assist programmers in safeguarding their code against data leaks and attacks.

Continue reading to learn more about programming language vulnerabilities and how to future-proof your code.

Python

Python is a programming language that is widely used because of its user-friendliness and legibility. On the other hand, it’s also one of the most vulnerable languages due to its popularity and the number of libraries available. According to the results of a recent study, more than 46 percent of all Python code contains at least one security issue.

The following are some of the most significant Python risk factors:

Vulnerable libraries
One of the most serious risks associated with Python lies in its libraries. When a new library is released, it may contain flaws that can be exploited by attackers.

Python code frequently relies on third-party components, which can introduce additional risks. A security breach could occur if one of these dependencies is compromised.

Best practices for Python include:

The use of a virtual environment
A virtual environment is a separate development environment that can help reduce the risk of dependency issues. When using one, install all dependencies in the virtual environment rather than globally.
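
Creating a virtual environment takes one call to Python's standard-library venv module (the directory name here is arbitrary):

```python
import venv

# Create an isolated environment; packages installed inside it after
# activation stay out of the global interpreter's site-packages
venv.create("project-env")
```

The command-line equivalent is `python -m venv project-env`.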

Perform software composition analysis (SCA)
The process of identifying and analyzing dependencies in code is known as SCA. Performing SCA, for example, with Kiuwan allows you to identify and mitigate code security risks quickly.

PHP

Because of its ease of use and wide range of available libraries, PHP can be an excellent choice for web development. As a result of its popularity and the number of web applications built with it, it is extremely vulnerable.

The following are some of the most significant PHP risk factors:

SQL injection
SQL injection is one of the most common attacks against PHP applications. By injecting malicious input into a SQL query, attackers can execute arbitrary commands against the database.

Remote code execution
Remote code execution is another common attack against PHP applications. This attack enables attackers to run code on the server, potentially compromising the entire system.

Best practices for PHP include:

Validating user input
It is critical to validate all user input to ensure no malicious code is present. This will assist in preventing SQL injection and remote code execution attacks.

Use prepared statements
By separating data from code, prepared statements can help protect against SQL injection attacks. Even if an attacker is able to inject malicious code, it will not be executed.
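
The principle applies in any language; here is a sketch using Python's sqlite3 module for illustration. The placeholder binds user input as data, so it is never parsed as SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# The ? placeholder is a prepared-statement parameter: the input is
# treated as a literal value, not as part of the SQL text
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] — the injection string matched no real user
```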

Java

Java has long been a popular choice for corporate development because of its platform neutrality and vast range of available libraries. Regardless, Java is susceptible because of the enormous number of legacy applications still in service.

The following are some of the most significant Java risk factors:

Outdated versions
Many Java applications are built on out-of-date platform versions. As newer versions frequently include security fixes for known vulnerabilities, this can leave them open to attack.

Insecure libraries
Java applications frequently rely on third-party libraries, which introduces additional risk. A security breach may occur if any of these libraries is compromised.

Best practices for Java include:

Use a dependency manager
The utilization of third-party libraries can be made safer with the assistance of a dependency manager.

Utilize strong encryption techniques
For any sensitive data being kept or sent, strong encryption should be employed. This will assist in preventing attackers from gaining access to this data, even if they are able to hack the system.

Ruby on Rails

Ruby on Rails is a well-known web development framework that is lauded for how simple it is to implement. Unfortunately, Ruby on Rails is insecure by default and contains harmful functionalities, making it susceptible to attack.

The following are some of the most significant Ruby on Rails risk factors:

Dangerous functions
Some Ruby on Rails functions, such as "eval" and "exec," might be harmful if used incorrectly. If these functions are not appropriately protected, an attacker might use them to execute malicious code on the server.
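
Python's eval carries the same danger, and the safe pattern looks the same in both languages: never hand user input to an evaluator. As an illustration, Python's ast.literal_eval parses literal data only and rejects anything executable:

```python
import ast

user_input = "[1, 2, 3]"
print(ast.literal_eval(user_input))  # parses plain literals: [1, 2, 3]

malicious = "__import__('os').system('echo pwned')"
try:
    ast.literal_eval(malicious)      # refuses anything but literal values
    blocked = False
except (ValueError, SyntaxError):
    blocked = True
print("rejected" if blocked else "executed")  # rejected
```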

Unsecured defaults
Many Ruby on Rails settings are insecure, such as the "secret key base" and "session cookie store." If they are not properly set, they may result in data security breaches.

Best practices for Ruby on Rails include:

Disabling dangerous functions
It’s essential to turn off any potentially hazardous functions that are not required. This prevents attackers from using them to execute malicious code.

Utilize security best practices
When setting up Ruby on Rails, it is essential to adhere to all of the recommended security best practices. This includes the use of strong passwords and encryption for any data that may be considered sensitive.

C

C was deemed to be the most vulnerable programming language in a recent report. This was owing to the number of significant vulnerabilities that are frequently detected in programs that are based on C.

The following are some of the most significant C risk factors:

Memory corruption
C allows direct memory manipulation, and memory-corruption bugs open the door for malicious code to be run on the system, giving hackers access.

Buffer overflows
Buffer overflows are a sort of software security issue that is widespread in C. They arise when more data than a buffer can handle is pushed to it, letting attackers overwrite other sections of memory and execute code.

Best practices for C include:

Static application security testing (SAST)
SAST can assist in identifying security flaws in C-based applications. It may provide thorough testing and be integrated into the software development life cycle.

Use a security-focused coding standard
Several coding standards focus on security, such as the CERT C Secure Coding Standard. Adherence to these standards can help decrease the risk of vulnerabilities in C-based programs.

JavaScript

JavaScript, like practically every other programming language, has a range of security flaws. Exploiting JavaScript’s vulnerabilities allows attackers to modify and steal data, hijack sessions, and do a variety of other things. While JavaScript is often considered a client-side language, its security flaws can cause difficulties in server-side contexts as well.

The following are some of the most significant JavaScript risk factors:

Source Code Vulnerabilities
Source code flaws frequently appear alongside other JavaScript security issues. The increasing usage of publicly accessible packages and libraries is another source of security flaws, and developers frequently install packages for even the most basic operations, increasing project dependencies. Of course, this can lead to security issues and other far-reaching implications.

Session data theft
Client-side browser scripts may be quite powerful since they have access to all of the material sent to the browser by a web application. This includes cookies that may include sensitive data, such as user session IDs. In reality, a popular XSS attack technique is to provide the attacker with the user's session ID tokens so that the attacker may hijack the session.

Best practices for JavaScript include:

Quality auditing through tools
While monitoring and resolving all potential application dependency vulnerabilities can be time-consuming and challenging, auditing tools can assist in automating and therefore speeding up the process.

Set secure cookies
Set your cookies to "secure", which restricts your application's cookies to encrypted HTTPS connections, to guarantee that SSL/HTTPS is in use.
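
What those flags look like when a server builds the header can be sketched with Python's standard-library http.cookies, used here purely for illustration; web frameworks expose the same attributes:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["secure"] = True      # only sent over HTTPS
cookie["session_id"]["httponly"] = True    # hidden from client-side JavaScript
cookie["session_id"]["samesite"] = "Strict"

print(cookie.output())  # the Set-Cookie header with Secure, HttpOnly, SameSite
```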


Even though security weaknesses are frequently shared across many programming languages, certain languages are more susceptible to attack than others. If they are not configured or used appropriately, any of these languages is left open to attack. As a result, it is essential to follow the best practices for each language in order to help lower the risks.

Which is the most secure programming language?

Feb 6, 2023 — 4 min read

We have made enormous leaps forward in terms of technology over the past decade. However, the growth of cyberspace brings with it new challenges for cybersecurity; cybercriminals have adapted their techniques to the new environment. Nevertheless, there is a solution to every challenge.

In light of this, let's take a look at some of the most serious cybersecurity threats and the solutions that have been offered for them in 2023.

The biggest threats to cybersecurity today and how to combat them

Adaptation to a remote workforce

Working from home exposes employees to some of the most common security threats. Employees may mistakenly let hackers access their computers or corporate files due to inattention, weariness, or ignorance. Protecting remote and hybrid working environments will remain one of the most difficult tasks in the world of cybersecurity.

Cloud-based cybersecurity solutions that safeguard the user's identity, devices, and the cloud are essential for secure remote working.

Blockchain and cryptocurrency attacks

Attacks on blockchain-based systems can be launched by both outsiders and insiders. Many of these assaults use well-known tactics such as phishing, social engineering, data-in-transit attacks, and those that focus on coding faults.

To defend organizations against cyberattacks, stronger technological infrastructure may be constructed using blockchain-powered cybersecurity controls and standards. Combining the blockchain with other cutting-edge technologies like AI, IoT, and machine learning may also be required.

Ransomware development

Ransomware is a type of malware that encrypts files on a victim's computer until a ransom is paid. Historically, organizations could keep their data fairly safe by using a standard backup procedure. The organization may be able to restore the data held hostage without paying the ransom, but this does not guarantee that the attackers will not try to exploit the data anyway.

As a result, users must prioritize frequently backing up their devices, employing cutting-edge anti-malware and anti-phishing solutions, and keeping them up to date at all times.

BYOD policies

Personal devices are more likely to be used to breach company networks, whether or not BYOD is permitted by IT, because they are less secure and more likely to contain security weaknesses than corporate devices. As a result, businesses of all sizes must understand and address BYOD security.

Among the management options are BYOD services, and the process begins with enrollment software that adds a device to the network. Company-owned devices can be configured individually or in bulk.

The dangers involved with serverless apps

For some developers, the event-driven nature of serverless computing and the lack of permanent states are drawbacks. Developers that need persistent data may encounter problems since the values of local variables may not survive between instantiations.

Enlisting the support of your company's cybersecurity expertise may be the best line of action for those who use serverless architectures.

Supply chain attacks are increasing

A supply chain attack happens when someone breaches your digital infrastructure by leveraging an external supplier or partner who has access to your data and systems.

Upkeep and maintenance of a highly secure build infrastructure, fast software security upgrades, and the creation of safe software updates as part of the software development life cycle are all essential.

Preventive social engineering measures

Cybercriminals use social engineering to get critical information from their targets by manipulating their psychology. It tricks users into making security mistakes and giving up sensitive information such as banking passwords, login credentials, and system access.

To avoid cyberattacks, organizations should employ a technology-and-training-based strategy. There is no one-size-fits-all solution to defeating these social engineers; instead, you must adopt an integrated approach that includes multi-factor authentication, email gateways, respected antivirus software, staff training, and other components to thwart such social engineering assaults.

Cyber security challenges in different industries

Cybersecurity issues are common anywhere cyberspace is used. Some significant industries that face specific cybersecurity challenges in business are listed below.

Vehicular communications

As Vehicle-to-Everything (V2X) communication technologies evolve and current cars are able to interface with external infrastructure, the necessity of securing communications becomes increasingly apparent. There is a very real possibility that the vehicles of today may be the targets of cyberattacks that are directed at vehicular communications.

Cybersecurity challenges in the healthcare industry

Cybercriminals continue to develop new methods to attack healthcare cybersecurity policies, whether it be high-value patient data or a low tolerance for downtime that might interfere with patient care. Both of these vulnerabilities present opportunities for cybercriminals. Hackers now have access to a market worth $13.2 billion thanks to the 55% rise in cyberattacks on healthcare providers that have occurred over the past several years. This has turned the healthcare industry into a veritable gold mine.

Cybersecurity challenges in banking and finance

Threats are constantly evolving and the cybersecurity landscape is constantly changing. With huge sums of money and the potential for significant economic shocks at stake in the banking and financial business, the stakes are high in this area. A significant hacking assault on banks and other financial institutions might result in severe economic consequences.

Online retailing

Retailers present a favorable and low-risk target environment for those who commit cybercrime. These businesses are responsible for the processing, storage, and protection of the data and sensitive information of their customers. This information may include financial credentials, usernames, and passwords. These details are susceptible to being attacked because of the ease with which they might be utilized in both online and offline operations.


Recent years have demonstrated how the key cyber security issues and threat actors are adapting their techniques to a changing global environment. The greatest strategy to safeguard your organization and plan for cybersecurity in 2023 is to be proactive. A single data breach can cost millions of dollars in lost data, penalties, and regulatory action. Understanding the hazards that are on the horizon will allow you to account for them in your procedures and stay one step ahead of attackers.

The most serious cybersecurity threats and solutions in 2023

Jan 12, 2023 — 5 min read

Of course you want to keep your data safe. So why are so many security precautions frequently overlooked? Many accounts, for example, are protected by weak passwords, making it easy for hackers to do their work. There is a fine line between selecting a password that no one can guess and selecting a password that is easy to remember. As a result, we will examine this topic in depth today and ensure that you no longer need to click on the "lost password" link.

What exactly is a strong password?

So let's begin with a definition. A secure password is one that cannot be guessed or broken by an intruder.

Hackers use computers to try out various combinations of letters, numbers, and symbols. Passwords that are only a few characters long and consist entirely of letters and digits can be cracked by modern computers in a couple of seconds. Because of this, it is vital to combine capital and lowercase letters, numbers, and special characters in one password. Passwords should be at least 12 characters long, although using a longer password is strongly encouraged.

To summarize the attributes of a secure password, they are as follows:

•  At least 12 characters are required. The more complicated your password, the better.

•  Upper and lower case letters, numbers, and special characters are included. Such passwords are more difficult to crack.

•  Does not contain keyboard paths

•  It is not based on your personal information

•  Each of your accounts has its own password

You have undoubtedly observed that a variety of websites "care" about the security level of your password. When you are making an account, you will frequently see tooltips that remind you to include a particular amount of characters, as well as numbers and letters. Weak passwords have a far higher chance of being disapproved by the system. Keep in mind that, for reasons related to your security, you should never use the same password for several accounts.

A secure password should be unique

It may be tempting to use one strong password for all of your accounts after you've created it. However, doing so will leave you more exposed to attacks. If a hacker obtains your password, they will be able to access every account you used it for, including email, social media, and work accounts.

According to surveys, many people use the same password because it is easier to remember. Don't worry, there are several tools available to assist you with managing multiple passwords. We'll get to them later.

While adding special characters in passwords is an excellent approach to increase their security, not all accounts accept all characters. However, in most scenarios, the following are used: ! " #% & *, / : | $ ; ': _? ().

Here are some examples of strong passwords that make use of special characters:

•  P7j12$# eBT1cL@Kfg

•  $j2kr^ALpr!Kf#ZjnGb#

Ideas for creating a strong password

Fortunately, there are several methods for creating unique and secure passwords for each of your accounts. Let's go over each one in detail:

1. Use a password generator/password manager

If you don't have the time to come up with secure passwords, a password generator that can also serve as a manager is a very simple and straightforward solution that you may use.
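
A generator of this kind is only a few lines with Python's secrets module (the symbol set below is an arbitrary choice):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Draw each character from letters, digits, and symbols using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + "!#$%&*@^_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. r@K9x_FQ2#bLm7^A
```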

2. Choose a phrase, not a word

Passphrases are significantly more secure than passwords since they are often longer and more difficult to guess or crack. Instead of a word, pick a phrase and use the first letters, digits, and punctuation from that phrase to generate an apparently random combination of characters. Experiment with different wording and punctuation.

Here are some examples of how the passphrase technique may be used to generate secure passwords:

•  I first went to Disneyland when I was eight years old and it made me happy: I1stw2DLwIw8yrs&immJ

•  My friend Matt ate six donuts at a bakery cafe and it cost him £10: MfMa6d@tbc&ich£10
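
The core of the technique — keeping the first character of each word — can be sketched as follows (a simplification: the examples above also fold in extra digits and symbols by hand):

```python
def passphrase_to_password(phrase: str) -> str:
    """Keep the first character of each word in the phrase."""
    return "".join(word[0] for word in phrase.split())

print(passphrase_to_password("I 1st went 2 Disneyland when I was 8 years old!"))
# → I1w2DwIw8yo
```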

3. Pick a more unique option

Open a dictionary or book and select a random word, or better yet, many. Combine them with numbers and symbols to make it far more difficult for a hacker to decipher.

As an example:

•  Sand, fork, smoke, okay — Sand%fork9smoke/okay37

4. Experiment with phrases and quotes

If you need a password that is difficult for others to guess but easy for you to remember, try variants on a phrase or statement that means something to you. Simply choose a memorable sentence and replace parts of the letters with numbers and symbols.

For example:

•  “For the first time in forever”: Disney’s Frozen: 4da1stTymein4eva-Frozen

5. Make use of emojis

Emoticons are an easy way to add symbols to your passwords without making them harder to remember. Most services won't accept actual emoji, but you can build emoticons out of punctuation marks, characters, and/or numbers.

For example:

•  ¯\_(ツ)_/¯

•  (>^_^)> <(^_^<)

•  (~.~) (o_O)

What should I do after I have created a password?

1. Set passwords for specific accounts

You'll still need a unique password for each of your accounts once you've created a strong password that you can remember. Instead of creating several new ones from scratch, you may append the name of the platform to the end. For example, if your password was nHd3#pHAuFP8, just add EMa1l to the end for your email account to get nHd3#pHAuFP8EMa1l.

2. Make your password a part of your muscle memory

If you want to be able to recall your password, typing it out several times can help you do so. You will be able to memorize information far more easily as a result of the muscle memory that you will develop.

How to keep your passwords safe?

1. Choose a good password manager

Use a trustworthy password manager whether you're setting your own safe passwords or looking for an internet service to handle it for you. It creates, saves, and manages all of your passwords in a single safe online account. All you have to do is put all your account passwords in the application and then safeguard them with one "master password". This means you just have to remember a single strong password.
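Under the hood, a password manager never stores the master password itself; it stretches it into an encryption key with a slow key-derivation function. A minimal sketch using Python's standard library (the salt handling and iteration count here are illustrative, not any particular product's implementation):

```python
import hashlib
import os

def derive_key(master_password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Stretch a master password into a 32-byte key with PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), salt, iterations, dklen=32
    )

salt = os.urandom(16)  # stored alongside the vault; it need not be secret
key = derive_key("correct horse battery staple", salt)
assert len(key) == 32  # this key, never the password, encrypts the vault
```

The high iteration count is deliberate: it makes each guess expensive for an attacker who steals the encrypted vault, while a single legitimate unlock stays fast enough not to notice.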

2. Use two-factor authentication

You've heard it before, but we'll say it again. Two-factor authentication (2FA) adds an additional level of protection. Even if someone steals your password, you can prevent them from accessing your account. This is often a one-time code supplied to you by text message or other means. Receiving an SMS, by the way, is not the most secure method since a hacker might obtain your mobile phone number in a SIM swap fraud and gain access to your verification code.

Apps using two-factor authentication are far more secure. Google Authenticator, for example, or Microsoft Authenticator.
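These apps implement the open TOTP standard (RFC 6238): your phone and the server share a secret, and each independently derives a short-lived code from that secret and the current time. A self-contained sketch using only Python's standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a counter, dynamically truncated."""
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation offset (last nibble)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(key: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed by the current 30-second time window."""
    t = int(time.time()) if for_time is None else int(for_time)
    return hotp(key, t // step, digits)

# RFC 6238 test vector: shared secret "12345678901234567890" at T=59
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because the code changes every 30 seconds and is derived locally, nothing travels over SMS, which is exactly why authenticator apps resist the SIM-swap attack described above.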

3. Passwords should not be saved on your phone, tablet, or computer

Although it might not be immediately visible, this is a common approach for people to save their passwords. That should not be done. Your files, emails, messenger conversations, and notes may all be hacked.

4. Keep your password confidential

Even if you completely trust the person to whom you are handing your password, sending it in a text message or email is risky. Even if you speak it aloud or write it down on paper, someone who is interested can overhear you and take notes behind you.

How to create a secure password

Jan 10, 2023 — 4 min read

Ransomware attacks are something all of us have been watching for some time. According to the most recent findings, over 21 percent of companies worldwide fell victim to ransomware attacks in 2022, and 43% of those attacks substantially disrupted business operations.

It’s true that cybercrime is on the rise, and those who commit these crimes are going after both individuals and businesses. In order to maintain a competitive advantage, it is essential to have a solid understanding of the types of cyber threats that will be prevalent in 2023.

The purpose of this article is to familiarize you with the most important developments in the field of cybersecurity that are expected to take place in 2023. There are a lot of different things to keep an eye on here, from emerging malware to security solutions based on artificial intelligence. In this section, we will discuss the potential effects of these trends on the future of cybersecurity and the steps you can take to better defend yourself.

1. The Internet of Things (IoT) and cloud security

It's critical to stay up to date on the newest cybersecurity developments in an ever-changing technological context. As more firms utilize cloud computing and Internet of Things (IoT) technology, the importance of adequate security measures grows.

When it comes to IoT and cloud security, it is critical to recognize the particular dangers that these technologies entail. One of the most serious concerns about IoT devices, for example, is that they are frequently "always on," leaving them exposed to external assaults. Similarly, if security mechanisms are not adequately established, cloud services might be accessible to hackers.

It is critical to have robust security procedures for your IoT devices and cloud services in order to keep your organization secure. This includes adopting strong passwords on all devices, enabling multi-factor authentication for access control, and ensuring that any data saved in the cloud is encrypted.

2. SaaS and Security as a Service (SECaaS)

As businesses and consumers rely more on cloud computing and software solutions, the requirement for effective security becomes even more critical. Compared to traditional on-premises solutions, SaaS security solutions offer rapid scale-up or scale-out based on demand, as well as cost savings. These solutions are also well suited to remote or dispersed teams whose business components may be located all over the world.

Data protection, identity and access management, web application firewalls, and mobile device security are all available through Security as a Service (SECaaS) solutions. They also provide managed services, which allow customers to delegate the monitoring and maintenance of their cloud security systems to qualified specialists. This helps guard against dangers like malware and ransomware while also keeping businesses up to date on the newest security developments.

3. Increased security for remote and hybrid employees

As the world continues to migrate to remote and hybrid work arrangements, cybersecurity must change to meet these new needs. Organizations must safeguard their systems and train their staff with cyberthreat defenses as their dependence on technology and access to sensitive data grows.

Multi-factor authentication (MFA), which requires multiple authentication stages to validate a user's identity before giving access to systems or data, is one security protocol that organizations should consider using. MFA can offer an extra degree of security against attackers who use stolen credentials to gain access to accounts.

Businesses should also consider adopting rules and processes to ensure the security of their workers' devices. This may involve offering safe antivirus software and encrypted virtual private networks (VPNs) for remote connectivity to employees. Employees must also be trained on the significance of using strong and unique passwords for each account, alongside the risks of connecting to public networks.

4. Machine learning and artificial intelligence

Artificial intelligence and machine learning have grown in popularity in the realm of cybersecurity in recent years. AI and machine learning (ML) offer automated threat detection and enhanced security processes, making them effective instruments in the battle against cyberattacks. Organizations may employ AI and machine learning to proactively detect and avoid dangers as these technologies evolve.

AI and machine learning can assist in the rapid and accurate analysis of vast volumes of data, enabling more effective threat identification and prevention. For example, AI may detect harmful or suspicious network activities, such as increased traffic from a certain source or trends in user behavior. Organizations can also use machine learning algorithms to identify abnormalities and prioritize warnings that may signal a possible breach.

Furthermore, AI and machine learning can automate key cybersecurity operations like patch management, malware detection, and compliance checks, saving time and money that would otherwise be spent on manual processes. The application of AI and machine learning may also help businesses lower the risk of false positives and ensure that only the most critical security incidents are highlighted.

5. Creating a Safe Culture

Businesses in today's environment must cultivate a culture of safety. Security cannot be handled after the fact or as a one-time job. It should be the organization's fundamental value, ingrained in all parts of its operations. This implies that everyone in the business must be informed of current cybersecurity trends and understand how to secure their data.

Employee training and checks and balances should be part of a safe culture. All personnel should be trained in the fundamentals of Internet security, as well as how to utilize systems and software safely. Policies, systems, and processes should be evaluated on a regular basis to ensure they are in compliance with the most up-to-date security guidelines.


As technology advances, cybersecurity risks and patterns will change. Businesses must keep ahead of the curve by monitoring emerging trends and updating their security measures as needed. Organizations can secure their data and networks from intruders by staying up to date with the five cybersecurity trends for 2023 outlined above.

Organizations can keep their data secure by staying current on these trends and implementing the required safeguards. They should also educate their personnel on the need to follow cybersecurity best practices. This will help create a secure environment and reduce the likelihood of a breach.

5 key cybersecurity trends to watch in 2023

Jan 10, 2023 — 4 min read

The film industry in general isn't recognized for its commitment to truth, and Hollywood's depiction of biometric technology is no exception. The use of technologies such as fingerprint scanners, face recognition software, and iris recognition technology has become increasingly frequent in a variety of films to portray dramatic and high-tech images of the future.

Let's take a more in-depth look at the way biometrics are portrayed in movies, and see which parts are science fiction and which are a reality that most people probably know very little about.

Biometrics in Hollywood blockbusters

First, we ought to define biometrics and how biometric characteristics may be used to identify people. Biometrics refers to identifying a person by their unique physical and behavioral features. Each individual has quantifiable, fixed markers that either do not vary over time or change only minimally. These markers are so distinct that they can tell one individual from another.

In addition to the well-known DNA, fingerprints, and face, unique biometric characteristics include the pupil/iris of the eye, palm print, hand print, scent, "pattern" of veins on the fingers and palm, and so on.

Many biometric parameters of a person may be used by modern technology for identifying people, but they vary in cost, speed, and accuracy of usage. Biometric technologies are often used to control access to important objects or to identify criminals. These aspects are well-represented in films, including, of course, Bond movies.


In one of the Bond films — "Skyfall", a security camera in the London Underground is used to search for an individual’s face.

The film shows how the biometric identification system scans and validates faces from security camera footage before recommending the "best fit" matches. Bond was readily located: his face was uncovered, he was facing the camera, and the system easily recognized him. Finding an intruder in the crowd was harder, however; with a hat pulled practically over the eyes, it is nearly impossible to recognize a person. For the algorithm to work, the system must "see" the entire face (measurements such as the distance between the eyes and the distance from the eyes to the lips). The technology recognizes the intruder only when he raises his head and the camera "sees" his eyes.

It should be underlined that this is not only possible, but it already works in reality.

Demolition Man

The next iteration of biometrics frequently exploited in movies is amputating a body part from one person so that another can use it to gain access to top-secret objects or information. The film "Demolition Man", in which an eye is removed and used, is one example.

In reality, this doesn't work. Because the majority of today's technologies are created with a "live" identification mechanism (pulse, reflexes, temperature, humidity coefficient, etc.), it is not possible to identify a dead portion of the body using these methods. Those who use fingerprint readers in their day-to-day lives can attest to the fact that the performance of the gadget is significantly diminished during the winter months because the fingers freeze.

In addition to the built-in mechanism just described, there is also a biological limitation: a severed finger is considered "invalid" after approximately ten minutes, and an amputated eyeball decomposes rather quickly, with the pupil spreading out and making it unsuitable for use as a unique identifier.

Minority Report

Developing the topic of biometric authentication with the help of the eyes, it is worth noting that an eye transplant procedure is a common approach in filmmaking for changing identity and gaining access to something. The film "Minority Report" is one such example.

Eye surgeons are unlikely to transplant an entire eyeball, owing to the inutility of such a procedure. For the eye to operate, the optic nerve must also work, which cannot be "stitched on" (much as a brain transplant cannot be performed), at least not yet. An eyeball transplant procedure is theoretically conceivable, but this eye will be unable to see, which is why nothing like this is done. We can only guess whether such an eye may be utilized for biometric identification.

Back to the Future 2

One of the most prophetic and reliable films in the field of biometric technology was "Back to the Future 2".

The film depicts the active use of biometric technology several times. First, people are identified by fingerprint (instead of, say, a passport): remember how the police fingerprinted Jennifer Parker, who was sedated by Doc prior to "arriving" in 2015? Second, the officers used the same fingerprint to enter Jennifer's Hilldale home. Third, payment for products and services was made using biometrics rather than credit cards: the elderly Biff pays for a cab by merely putting his finger on a biometric sensor.


On all three counts, the filmmakers were remarkably accurate. To receive a visa to enter the United States, the European Union, and some other countries, you are required to leave your biometric data, whether fingerprints or retinal scans. Of course, not all US residents have had their fingerprints taken yet.

In addition, payments made using a customer's fingerprint have already started to become more commonplace in the banking industry. The widespread Apple Pay service is a good illustration of this point. To validate the transaction, all that is required of you is to scan your fingerprint by pressing a single button that is located on the front of your smartphone. In newer models, you simply have to scan your face.

Last but not least, a number of firms have already introduced door locks that can be opened using a fingerprint. One of Samsung's many business divisions focuses on "Smart Home" goods, one of which is the production of electronic door locks.

Science fiction from films is clearly becoming a reality; certainly, the imagination and ingenuity displayed by writers and filmmakers may be what pushes scientists to research and bring that vision into reality.

Biometric technology has a bright future. The wildest and seemingly impossible visions of filmmakers in the 1980s and early 2000s are no longer the future; they are becoming part of everyday life.

Biometrics in Hollywood movies: fantasy or reality?

Jan 9, 2023 — 5 min read

People have used encryption to keep their communications private since the time of the Roman Empire. When the emperor Gaius Julius Caesar wrote an important message, he would replace each letter of the source text with the letter positioned three places away in the alphabet. If his adversaries intercepted the communication, they could not decipher it and assumed it was written in some other language. This method of concealment became known as the Caesar cipher, one of the substitution ciphers, whose overarching strategy is to replace each character of the message with a different character.
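The Caesar cipher is simple enough to express in a few lines. A sketch in Python:

```python
def caesar(text: str, shift: int) -> str:
    """Shift every letter by `shift` places, wrapping around the alphabet."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation untouched
    return "".join(result)

print(caesar("ATTACK AT DAWN", 3))   # DWWDFN DW GDZQ
print(caesar("DWWDFN DW GDZQ", -3))  # shifting back decrypts the message
```

Decryption is just encryption with the opposite shift, which is precisely why the scheme offers no real secrecy: there are only 25 possible keys to try.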

Later schemes went further: in encrypted messages, common words were replaced by a single symbol, which defeats straightforward letter-for-letter substitution analysis. In this manner, Mary Stuart, imprisoned in Sheffield Castle, corresponded with Anthony Babington about the conspiracy and Elizabeth's death. This is a part of that letter.

Indeed, Elizabeth's counterintelligence service, commanded by Francis Walsingham, intercepted the letter, which was quickly decrypted by Elizabeth's greatest cryptanalyst, Thomas Phelippes. How did he manage it? Through frequency analysis.

Every letter appears in a language with a characteristic frequency. To attack a substitution cipher, you compare how often each symbol occurs in the ciphertext with the known letter frequencies of the language, then substitute and test hypotheses. This is called frequency analysis. It only works on reasonably long texts, and the longer the text, the better.
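Frequency analysis is equally easy to sketch: count how often each letter occurs, then match the counts against the language's known distribution. For a Caesar-style cipher you can even recover the shift automatically by assuming the most frequent ciphertext letter stands for E, the most common letter in English (a heuristic that only works on sufficiently long text):

```python
from collections import Counter

def frequencies(text: str) -> dict:
    """Relative frequency of each letter in the text, most common first."""
    letters = [c for c in text.upper() if c.isalpha()]
    total = len(letters)
    return {c: n / total for c, n in Counter(letters).most_common()}

def guess_caesar_shift(ciphertext: str) -> int:
    """Guess a Caesar shift by assuming the most common letter stands for E."""
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    most_common = Counter(letters).most_common(1)[0][0]
    return (ord(most_common) - ord("E")) % 26
```

On a short message the guess will often be wrong, which is exactly the point the text makes: frequency analysis needs length to work.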

Anthony Babington was hanged, drawn, and quartered; Mary Stuart was beheaded; and simple letter replacement was no longer deemed secure. However, an antidote to frequency analysis was discovered almost immediately: it is sufficient to use several cipher alphabets, encrypting one part of the text with one and the next with another, and frequency analysis is rendered ineffective.

Since then, there has been an ongoing race between encryption and cipher cracking.

The cracking of the Enigma cipher machine used by Nazi Germany to safeguard military and political communications is the most notable feat in breaking encryption algorithms. By the standards of the time, it was a superb encryption device, on which the brightest brains in Germany collaborated. But deciphering the encryption required no less of a force: a team of British cryptographers collaborated with the young scientist Alan Turing.

Despite the cloak of secrecy, his name is linked with finding the way in. The key turned out to be a seemingly mundane detail: the obligatory Hitler greeting that concluded every piece of correspondence gave the codebreakers predictable text against which to test settings. Alan Turing accomplished the impossible, providing his country with a crucial advantage during World War II.

Modern algorithms like AES, Twofish, and Blowfish differ significantly from letter substitution and transposition, as well as from Enigma's ciphers; they have little in common with them and are resistant to brute-force and frequency-analysis attacks. One thing stays constant, however: there are still people who wish to break them and read encrypted messages. Nowadays, the availability of such dependable data-protection tools cannot help but bother the intelligence services that want access to any information.

Methods of attacks on ciphers by intelligence agencies

Today, intelligence agencies use three primary methods to attack ciphers.

Direct key selection to ciphers

Data centers that use brute force to break encrypted data are being built for this purpose. You can crack practically any contemporary cipher by brute force: simply guess the key (which is logical: if there is a key, then in theory it can sooner or later be guessed). The only question is how much computing power and how much time you have. For example, whereas a single contemporary computer can check on the order of 10,000 keys per second, a data center of thousands of machines may try tens of millions of keys per second.
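The arithmetic behind such estimates is straightforward. Taking the illustrative rates above (the figures are the article's, not benchmarks), the worst-case time to exhaust a keyspace is:

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(key_bits: int, keys_per_second: float) -> float:
    """Worst-case years needed to try every key of a `key_bits`-bit keyspace."""
    return 2**key_bits / keys_per_second / SECONDS_PER_YEAR

# A single machine at 10,000 keys/s against a modest 64-bit keyspace:
print(f"{years_to_exhaust(64, 1e4):.2e} years")  # on the order of 10^7 years
# A data center at 10 million keys/s is a thousand times faster,
# yet still needs tens of thousands of years for the same keyspace.
print(f"{years_to_exhaust(64, 1e7):.2e} years")
```

Each extra key bit doubles the work, which is why even a thousandfold increase in hardware barely dents a well-sized keyspace, and why attackers fall back on dictionaries instead.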

Fortunately, cracking a strong cipher can take far more than a dozen years even in a contemporary data center, and it is hard to imagine what you would have to do for an entire data center to be devoted to cracking your encrypted data; after all, a single day of data-center time costs tens of thousands of dollars. Because of the expense, attackers generally fall back on basic password guessing with a dictionary.

This was the situation with Daniel Dantas, a Brazilian banker who was detained in Rio de Janeiro in July 2008 on accusations of financial fraud. Five hard drives with encrypted data were discovered during a search of his flat. Local specialists were unable to break them and turned to the FBI for assistance. The FBI returned the drives after a year of futile attempts; dictionary-based password guessing had been used for the hacking. Daniel Dantas had devised a strong password that was immune to dictionary attacks. It is unclear whether this helped him in court, but access to his encrypted data was never obtained. Incidentally, he used the encryption application TrueCrypt.

Aside from data centers, there is ongoing development of quantum computers, which have the potential to drastically change modern cryptography. If cryptographers' forecasts come true, it will become possible to crack any current crypto container very quickly once such a supercomputer appears. Some believe that such a machine has already been built and sits somewhere in the hidden cellars of the US National Security Agency.

The second attack method is a scientific study of modern encryption algorithms with the aim of breaking them

A lot of money is being invested in this area, and such discoveries are truly invaluable for intelligence services. Here, researchers compete with intelligence agencies: if researchers break a protocol or discover a weakness first, the rest of the world is likely to learn about it and switch to more secure methods; if intelligence agencies find it first, it is discreetly used to obtain access to encrypted data.

A 768-bit RSA key was regarded as an entirely reliable solution ten years ago, and it was used by private users, huge corporations, and governments. However, at the end of 2009, a consortium of engineers from Japan, Switzerland, the Netherlands, and the United States successfully factored a 768-bit RSA key. Migration to 1024-bit RSA keys was then recommended; however, 1024-bit keys are no longer deemed strong enough either.

The third attack method is a collaboration with device, program, and encryption algorithm creators to weaken encryption and add backdoors

It is sufficiently difficult for intelligence services to decrypt a correctly encrypted crypto container, so instead they try to bargain with firms producing encryption tools so that the latter leave decryption flaws or weaken the algorithms used. The US NSA is ahead of the rest of the world in this regard. According to Edward Snowden's disclosures, the American cryptography vendor RSA Security was paid $10 million by the NSA to build a backdoor into its software. For this money, RSA Security shipped its clients the notoriously flawed Dual_EC_DRBG pseudo-random number generator. Because of this flaw, US intelligence services were able to readily decode communications and information.

We don't know what additional backdoors exist in encryption algorithms today, but we do know that decrypting information is one of intelligence services' top goals. High-level professionals are continually working on it, and governments are pouring money into it. It is well known that the majority of efforts are focused on cracking SSL protocols, 4G security technologies, and VPNs.

The history of encryption. Confrontation of encryption and intelligence agencies.

Dec 16, 2022 — 4 min read

A file made of DNA that is capable of retaining terabytes of information is a very real prospect for scientists.

To this day, humanity has produced around 10 trillion gigabytes of data, and on a daily basis, people generate emails, photographs, films, and other information that add up to another 2.5 million gigabytes. A significant portion of this information is kept in exabyte data centers, which have the footprint of several football fields and have an annual operating cost of one billion dollars. However, researchers have developed an alternate strategy, which consists of a section of DNA that is able to store vast quantities of information in a compact shape.

According to Mark Bathe, a professor of biological engineering at the Massachusetts Institute of Technology, you could hypothetically fit all of the data in the world into a coffee cup full of DNA.

The DNA molecule is an ideal storage device for digital data

"We need innovative methods to store the massive volumes of data that are growing throughout the world," says Mark Bath. "DNA is a thousand times denser than any flash drive, and it also has the fascinating virtue of not using energy. Anything may be written into DNA and stored indefinitely " he continues.

Text, images, and any other type of information are all encoded as a series of zeros and ones when saved to digital storage devices. The same information may be encoded in DNA using the four nucleotides that make up the genetic code, which is designated by the letters A, T, G, and C. For instance, the numbers 0 and 1 can be represented by the letters G and C, respectively.
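Since there are four bases, each nucleotide can in fact carry two bits rather than one. A toy encoder/decoder in Python (the particular bit-to-base mapping below is an arbitrary illustrative choice, not the scheme any lab uses):

```python
# Arbitrary (illustrative) mapping of two-bit pairs to the four bases.
BITS_TO_BASE = {"00": "A", "01": "T", "10": "G", "11": "C"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Pack each byte into four nucleotides (two bits per base)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Reverse the mapping: every four bases become one byte."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

print(encode(b"Hi"))       # TAGATGGT
print(decode("TAGATGGT"))  # b'Hi'
```

Real encodings add error correction and avoid long runs of the same base, since those are hard to synthesize and sequence accurately, but the underlying idea is exactly this mapping.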

DNA possesses various characteristics that make it a good information carrier:

•  DNA is very stable

•  DNA is relatively simple to synthesize and sequence

•  DNA is extremely dense: each nucleotide encodes two bits in roughly one cubic nanometer, so an exabyte could fit in the palm of your hand

However, there is a drawback: the expense of synthesizing such enormous amounts of DNA. Recording one petabyte of data (1 million gigabytes) currently costs about $1 trillion. According to Bathe, the cost of synthesis needs to drop by around six orders of magnitude before DNA-based archives become economical, which he believes is entirely feasible in 10-20 years.

Another difficulty is obtaining the needed file.

"What happens if technology advances to the point where it is economically feasible to write an exabyte or zettabyte of data into DNA? You'll have a pile of DNA containing millions of photographs, texts, videos, programs, and other data, and you'll need to locate a certain file: how will you accomplish it?" Bath inquires.

It's like looking for a needle in a haystack.

How are files encoded?

At this time, PCR (polymerase chain reaction) is the most common method for retrieving DNA files. Each file contains a sequence designed to bind to a particular PCR primer (a short piece of nucleic acid). To extract a particular file, each primer is introduced to the sample individually to locate the necessary sequence. One drawback of this method is the likelihood of crosstalk between the primer and other DNA sequences, which can lead to the loss of some files. In addition, PCR amplification requires enzymes and results in the loss of a considerable amount of DNA: you more or less have to burn the haystack to find the needle.

The problem was solved by Professor Bathe and his colleagues when they encapsulated each file in a silica particle measuring 6 micrometers and included a brief DNA sequence indicating what was contained within the file. Using this method, the researchers were able to retrieve individual photos stored as DNA sequences from a batch of 20 files with one hundred percent accuracy. It is conceivable to scale up to a sextillion files, given the number of potential labels that may be used. (A sextillion, by the way, is a one followed by 21 zeros.)

Hack DNA to find the right file

The team at MIT devised a novel extraction approach by isolating each file in a silica particle as an alternate option. Each such "capsule" is labeled with a single string of "barcodes" relating to the file's contents, such as "cat", "airplane", and so on. The researchers encoded 20 distinct pictures into DNA segments around 3,000 nucleotides long, which is comparable to about 100 bytes, to show their method in a cost-effective manner. (They also demonstrated that data as large as a gigabit might fit within the capsules).

When the researchers sought to extract a specific image, they added to the DNA sample primers that matched the labels they were seeking: "cat", "red", and "wild" for a tiger shot, or "cat", "orange", and "domestic" for a domestic cat photo. The primers are tagged with fluorescent or magnetic particles, making it simple to extract and identify the wanted files while leaving the remainder of the DNA intact for continued storage. The strategy is comparable to searching for terms on Google.
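Conceptually, this retrieval step is a Boolean query over label sets. A hypothetical sketch (the capsule records and file names below are invented for illustration; the real system works with physical primers, not in-memory dictionaries):

```python
# Hypothetical in-memory model of labeled capsules (illustrative only).
capsules = [
    {"labels": {"cat", "red", "wild"}, "file": "tiger.jpg"},
    {"labels": {"cat", "orange", "domestic"}, "file": "housecat.jpg"},
    {"labels": {"airplane", "white", "commercial"}, "file": "jet.jpg"},
]

def retrieve(query: set) -> list:
    """Return every file whose capsule carries all the queried labels."""
    return [c["file"] for c in capsules if query <= c["labels"]]

print(retrieve({"cat", "red", "wild"}))  # ['tiger.jpg']
print(retrieve({"cat"}))                 # both cat photos match
```

Broad queries return whole categories while specific label combinations pin down a single file, which is what makes the approach feel like a keyword search over a physical medium.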

"So far, the search speed is one kilobyte per second. The size of the data per capsule determines the search speed of our file system. It is also worth mentioning that the speed is constrained by the prohibitively high cost of writing even 100 gigabytes of data per DNA, as well as the number of sorters that may be used concurrently.

"If DNA synthesis gets cheap enough, we can optimize the quantity of data stored", said scientist James Banal.

The researchers created their barcodes using single-stranded DNA sequences from a library of 100,000 sequences, each around 25 nucleotides long, established by Stephen Elledge, a genetics and medicine professor at Harvard Medical School. If you place two of these labels on each file, you may label each one uniquely.

Final words

While DNA may not be extensively employed as a data carrier for some time, there is currently a large need for low-cost, high-volume storage solutions.

The DNA encapsulation approach can be effective for archiving data that is only occasionally accessed. To that end, Professor Bathe's laboratory is already at work on a company called Cache DNA, which will provide a method for the long-term storage of information in DNA.

How soon will we be able to store files in our DNA?

Dec 8, 2022 — 4 min read

The most frequently used password globally is "123456". However, analyzing passwords by country can yield some quite fascinating results.

We frequently choose weak passwords such as "123456" since they are easy to remember and input. The differences between such passwords can sometimes be found in the language itself. For example, if the English have "password" at the top of their list, the Germans prefer "passwort", and the French use "azerty" instead of "qwerty" due to the peculiarities of the French keyboard layout, which has the letter A instead of the usual Q.

When a weak password is driven by culture, things get much more intriguing. The password "Juventus" is likely to appeal to fans of the Italian football club Juventus; it is the fourth most popular option among Italian Internet users, and the club, based in Turin, Piedmont, is supported by about 9 million people. The password "Anathema" looks unusual at first glance, yet in Turkey the British band Anathema's name is among the top ten most common passwords.

A weak password is widespread

ExpressVPN together with Pollfish interviewed 1,000 customers about their password preferences in order to learn more about how individuals approach password formation.

Here are some of their findings:

•  The typical internet-goer uses the same password for six different websites and/or platforms

•  Relatives are likely to be able to guess their passwords from internet accounts, according to 43% of respondents

•  When generating passwords, two out of every five people utilize different variants of their first and/or last name

These findings demonstrate a lack of cybersecurity knowledge, despite the fact that 81% of respondents feel confident in the security and privacy of their existing passwords.

According to the survey results, passwords frequently contain personal information. Below, you will find the most shared personal information with the percentage of respondents who revealed that their passwords contained personal information.

•  First Name (42.3%)

•  Surname (40%)

•  Middle Name (31.6%)

•  Date of birth (43.9%)

•  Social security number (30.3%)

•  Phone number (32.2%)

•  Pet name (43.8%)

•  Child's name (37.5%)

•  Ex-partner's name (26.1%)

The most common passwords in various countries

Based on an infographic from ExpressVPN, the picture below illustrates the most often used passwords in various nations, practically all of which are in the top ten in their respective countries. Many are exclusive to these nations and demonstrate how cultural influences impact password creation.

Much of the information presented comes from a third-party study of stolen credentials (which were made public by GitHub user Ata Hakçıl). These datasets are grouped by the language of the individual sites, allowing the information to be broken down by country.

Let's have a look at some interesting variations of passwords. For instance, the phrase "I love you forever" may be deciphered from the password "5201314", which is commonly used by people from Hong Kong. Users in Croatia favor "Dinamo", derived from the name of an illustrious football club based in Zagreb. Slovak users often choose "Martin", the fourth most common name in Slovakia. The Greeks chose not to exert themselves and went with the most straightforward password on the list: 212121. Ukrainians, by contrast, use the fairly difficult password Pov1mLy727. Apart from Ukraine, there are other countries where users more often than not create strong passwords. Let's take a look.

These 10 countries create the strongest passwords

According to the results of the National Privacy Test carried out by NordVPN, Italians obtained the highest marks for their understanding of robust passwords. The following is a list of the top ten nations whose people come up with the most complicated passwords.

1. Italy 94.3 (points out of 100)

2. Switzerland 94

3. Spain 93.5

4. Germany 93.3

5. France 92.3

6. Denmark 91.8

7. UK 90.7

8. Belgium 90.4

9. Canada 89.4

10. USA 89.3

The top 10 did not include Australia (88.9), South Africa (86.2), Saudi Arabia (85.7), Russia (81.4), Brazil (81.2), India (78.4), and Turkey (73.9).

"This study demonstrates that individuals from all around the world are aware of how to generate secure passwords. The information is there, but people aren't using it in the right ways," says Chad Hammond, a security specialist at NordPass.

Also, in November 2022, NordPass published a study that found out which passwords internet users use most often. According to the findings, the majority of individuals still rely on simple passwords such as their own names, the names of their favorite sports teams or foods, simple numerical combinations, and other straightforward options.

NordPass security specialist Chad Hammond also stated, "Using unique passwords is really crucial, and it's scary that so many individuals still don't. It is critical to generate distinct passwords for each account. We put all accounts with the same password in danger when we reuse passwords: in the case of a data breach, one account at risk can compromise the others."

To summarize, it is reasonable to state that it does not matter where you were born, where you live, or what you are passionate about; you must always use unique passwords. We recommend that you make your password difficult to guess by making it more complicated or by using a password generator. This will increase the level of security provided by your password. In addition, we strongly suggest that you take advantage of two-factor authentication wherever it is an option. If you add an additional layer of protection to your accounts, be it in the form of an app, biometrics, or a physical security key, you will notice a significant increase in their level of security.

How passwords differ around the world

Dec 6, 2022 — 4 min read

The truth is, the answer isn’t as straightforward as you might think. A ‘hacker’ is a name that can be ascribed to many different types of individuals — from North Korean crypto bridge drainers to a jealous 16-year-old trying to get into his girlfriend’s Facebook account. That’s why it’s important to understand exactly what a ‘real’ hacker is and what kinds of assaults may be carried out.

As a result of the controversy that surrounds the concept of hacking, hackers frequently get labeled as criminals. In fact, hacking is simply the process of obtaining and presenting information or data, and it takes many forms, some far less severe than others; "security hacking" is merely the most widely known of them.

Security hackers are "individuals who utilize their knowledge or competence in computer operations to obtain access to systems or defeat Internet security barriers". "Gaining access" is the fundamental aspect of hacking. Some hackers do it for the thrill of it, while others do it for financial benefit. Some are even driven by political motivations.

Types of security hackers

Black Hat

This is the average hacker in the headlines and the greatest threat to your company, motivated by monetary gain. Their purpose is to enter your company and steal bank information, private data, and money. The stolen resources are utilized for extortion, illicit market sales, or personal benefit.

White Hat

These hackers are the antithesis of black hat hackers, since they want to assist companies and support them in their cyber protection efforts, either pro bono or in exchange for payment. In other words, a white hat is a firm or an individual that helps protect your organization and defend its data.

Gray Hat

Personal pleasure drives these hackers. They know everything that white and black hat hackers know, yet they are uninterested in attacking or safeguarding you. Usually, they merely have a good time breaking down fortifications as a test. They seldom do anything damaging; they break in and move on. They constitute the vast majority of all hackers.

Blue Hat

This hacker is spiteful and hostile. They don't exist unless you make them. As a result, it is worthwhile to follow business ethics and treat consumers and other parties fairly. Because who knows, if you're not playing fair, you enrage them, and one of them turns into a hacker with a blue hat. They frequently modify off-the-shelf attack programs to suit their needs. They then utilize this code to exact vengeance on a company or individual.

Red Hat

Crusaders in cyberspace. They are vigilante superheroes who also serve as judges, juries, and executioners. Their mission is to eradicate black hat hackers from the internet, and they employ a slew of black-hat cyberweapons against them. Like comic book superheroes, however, they operate out of sight. The upside for your business is that they, like white hat hackers, help defend you.

Green Hat

Inexperienced hackers. They are yet to become full-fledged hackers. They put programming to the test in order to learn. They normally do not assault businesses and instead learn from experienced hackers in internet groups. They don’t usually pose a hazard to your business.

Script Kiddie  

These guys are not like the rest. Despite the innocuous-sounding name, their purpose is to cause as much devastation and destruction as possible. They have no desire to steal. They concentrate on scripting but do not create their own software. DoS (denial of service) and DDoS (distributed denial of service) attacks are their common tools. They'll utilize any sort of assault that might create havoc within your firm, harm your reputation, or result in client loss.

The country with the highest number of hackers

With definitions out of the way, you can be sure of the kinds of hackers we're talking about. Indeed, China is home to the world's highest number of hackers per capita. It is possible to fall into the trap of believing that everything is predicated just on the size of China's population, which is enormous. However, not everything is as it seems at first glance. The hacker networks or organizations that China employs are among the most advanced and sophisticated in the world. China's People's Liberation Army (PLA) backs some of these groups financially and logistically.

Also, in order to achieve domination over other nations in cyberspace, China is encouraging cybersecurity as a culture. This will ensure that its educated youth have an excellent level of cyber literacy. This has also resulted in a rise in the amount of cybercrime. Various estimates suggest that China is responsible for 41% of all cyber assaults that occur throughout the world.

The idea of "network warfare" in Chinese information operations and information warfare is approximately equivalent to the American concept of cyber warfare. According to Foreign Policy magazine, China's "hacker army" numbers between 50,000 and 100,000 members, in addition to other groups and individuals. Chinese hackers might be described as "patient dreamers and social engineers." Asia, the Pacific, and Australia are their favorite locations.

Chinese hackers' typical attacks

A common Chinese hack employs a viral SMS message including a link that gathers credentials or installs keystroke-monitoring software in search of bank account access. It is worth noting that the majority of China's cybercrime infrastructure is based outside the nation, owing to strict government rules. Another factor to consider is that, over the last 20 years, China has swiftly absorbed and overtaken Western nations in the latest technology: the city of Shenzhen, for example, is regarded as the world's electronics capital.

One of China's objectives is plainly the acquisition of intellectual property for use in both the business and public sectors. The other is the urge to spy on its own citizens and those of other nations; the surveillance program includes, for example, eavesdropping on Americans online, according to an April 2021 Human Rights Watch report. Will the government take a more active role in combating and preventing cybercrime? Only time will tell.

What country has the most hackers per capita?

Nov 30, 2022 — 4 min read

In contrast to other forms of verification, such as passwords or tokens, biometric authentication relies on an individual's distinct biological traits to confirm their identity. It is harder to fake and is typically more user-friendly, since users do not have to memorize passwords or carry a physical token that may easily be lost or stolen.

Analysis of a person's speech may be used for identity verification using a process known as voice recognition, which is sometimes referred to as speech recognition or voice authentication. Airways and soft tissue cavities, in addition to the shape of the mouth and the movement of the jaw, all have an effect on speech patterns and help create a person's distinctive "vocal print."

There’s a kind of voice biometric technology known as speaker recognition. It’s not the same as speech recognition, the technique utilized in speech-to-text applications and in virtual assistants such as Siri and Alexa. Speech recognition can comprehend spoken words, but it cannot verify a speaker's identity based on the speaker's vocal characteristics; voice biometrics can.

Methods for recognizing the speaker

There are primarily two methods that may be used for voice authentication:

  • Text independent
    Any spoken passphrase or other types of speech material may be used to achieve voice authentication
  • Text-dependent
    In both the registration process and the verification process, you will use identical passphrases. This implies that the speaker will be asked to repeat a sentence that has already been decided upon, rather than being allowed to say anything they want. With static text voice authentication, the same passphrase is used for every verification. With dynamic text-based voice authentication, the user is given a randomly generated passphrase, such as a series of numbers, which must likewise be registered beforehand.

Registration and confirmation of identity

To generate a reference template for comparison during subsequent authentication attempts, the biometric speech sample must be captured with a microphone and registered. After that, distinctive aspects of the vocal performance are observed, such as:

  • Duration
  • Intensity
  • Dynamics
  • Pitch
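As a rough illustration of the kind of features such a system measures, the sketch below estimates two of them, intensity (as RMS energy) and pitch (via autocorrelation), for a synthetic tone. A real voiceprint pipeline is far more sophisticated; every name here is invented for the example:

```python
import math

def rms_intensity(samples):
    # Root-mean-square energy: a crude proxy for vocal intensity
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def estimate_pitch(samples, sample_rate, fmin=50, fmax=500):
    # Pick the autocorrelation lag (within the typical voice range)
    # with the strongest self-similarity; pitch = rate / lag
    best_lag, best_corr = 0, 0.0
    for lag in range(int(sample_rate / fmax), int(sample_rate / fmin) + 1):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

# A half-second 120 Hz sine wave stands in for a voiced sound
sr = 8000
tone = [math.sin(2 * math.pi * 120 * n / sr) for n in range(sr // 2)]
print(f"{estimate_pitch(tone, sr):.1f} Hz")  # close to 120 Hz
```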

Examples of voice authentication

The hands-free mobile authentication use case is the most common application of voice authentication. This kind of identification is perfect for mobile phones and for situations where other types of biometric verification, such as face recognition, fingerprint recognition, or iris recognition, are impractical, for instance in automobiles.

Voice authentication may also be beneficial for voice recognition devices like Amazon Alexa and Google Home. There has been a recent uptick in the usage of virtual assistants to carry out activities such as placing orders and doing other tasks that would traditionally demand some kind of verification.

During help desk conversations, speaker recognition may also serve as an authenticator for callers. When compared to supplying personal information to verify identification, such as a driver's license or credit card number, users may discover that this method is not only more secure but also more convenient.

Advantages of voice recognition

Low operational costs

Voice authentication may result in cost savings for call centers as well as financial institutions. They are able to save millions of dollars because this technology does away with many of the stages required by conventional verification procedures. During an end-to-end conversation, it can validate the customer's identity just by recognizing their voice, eliminating the routine security questions that are otherwise asked.

Improved quality of life for the end customer

Voice biometric systems provide a number of benefits, one of which is that they have the potential to significantly enhance the customer experience. However, this potential is sometimes overlooked. It is no longer necessary for callers to provide passcodes, PINs, or answers to challenge questions in order to have their identities verified.

Once a client has been registered, their voiceprint may be utilized across all of a company's support channels, which makes speech biometrics ideal for omnichannel and multichannel deployments.

Increased accuracy

Voice authentication is more reliable and accurate than using passwords, which are simple to forget, change, guess, or compromise. Like a fingerprint, a voice uniquely identifies its owner: in contrast to a password, it cannot be forgotten and is hard to imitate. Even though the sound may be influenced by a number of factors, it is much more dependable and convenient.

Technology that is simple to put into action

The ease of use and implementation that speech recognition biometrics provide is very valuable to a lot of different companies. It may be difficult to implement some forms of biometric technology inside an organization and to get started with these systems. However, due to the fact that speech biometric systems need so little, it is often possible to install them without the need for extra hardware or software.

Because this technology is so easy to use, businesses often have the ability to redeploy employees to other areas of the organization in order to improve both their efficiency and the level of pleasure they provide to their customers.


Voice authentication is an excellent method for verifying a user's identity, since it offers extra levels of security that manual passcodes may not be able to give. It is advantageous for both the company and its consumers, as it eliminates the annoyance associated with laborious login procedures.

The technologies behind voice recognition

Nov 24, 2022 — 4 min read

There is no good reason, from a technical standpoint, why passwords can't contain scripts in Chinese, Japanese, Korean, or any other language for that matter. If you are able to write in this script, then it is entirely appropriate for you to employ it in whatever endeavors you undertake.

However, if you put this theory to the test, you will discover that many websites, including well-known ones like Google, prevent you from entering a password that contains characters other than A-Z, 0-9, and common special characters.

This brings to mind the early days of the internet when certain websites forbade the use of capitalization and prohibited the use of Latin letters for no discernible reason.

Site issues with passwords containing Chinese characters

Security-conscious users often employ passwords that are longer than 30 characters, include all of the character types that are usually suggested, and are created at random. If you use a password manager, you should probably make the password as complex and as lengthy as it can possibly be.

However, if you have accounts on more than 150 websites, each with its own password, you will find that many of them enforce password rules that lower their level of security rather than increase it, even though these rules are designed to protect users from themselves.

For instance, several websites impose arbitrary restrictions on the maximum length of passwords. They will typically demand passwords with less than 20 characters, in many instances. In certain cases, you can only use a maximum of 12 characters.

Certain websites require that you include a number and a special character, despite the fact that imposing such rules decreases the entropy of the password. On other pages, one may be restricted to using just the Latin letters; numerals and punctuation are not allowed. On certain websites, one may use punctuation, but you have to choose it from a drop-down menu first, and characters like "&" are not permitted.

This last point ought to give you significant cause for worry. Are these websites sanitizing the password before inserting it into the database? Passwords should never be stored in a database in readable form: you are required to hash the password before saving it. I wonder how many severe privacy breaches have had this as their cause.
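A minimal sketch of the right approach, using only Python's standard library (PBKDF2 here; the work factor is illustrative, and production systems should use a vetted scheme such as bcrypt, scrypt, or Argon2):

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune to your hardware

def hash_password(password: str):
    # A random per-user salt defeats precomputed rainbow tables
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest  # store these, never the plaintext password

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), salt, ITERATIONS)
    # Constant-time comparison avoids leaking timing information
    return hmac.compare_digest(candidate, digest)
```

Note that the stored value is a fixed-size binary digest, so the question of "sanitizing" password characters never arises: any Unicode string, CJK characters included, hashes the same way.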

In any event, the end effect of all of this is that a significant number of websites still verify passwords in an erroneous manner, excluding characters that really should be fully allowed. There is no valid reason why "您未设置安保问题" can’t serve as your password.

So, how safe is such a password?

Entropy is a term used to describe both the difficulty of breaking a password and the complexity of the password itself. In the next paragraphs, we will examine how to compute the entropy of a password.

If our character set covers the lowercase and uppercase Latin letters, digits from 0 to 9, punctuation marks, and so on, then we have a pool of about 90 characters. This results in an entropy per character of log2(90), which is equivalent to 6.49 bits. If, on the other hand, we expand our character pool to include all Chinese, Japanese, and Korean (CJK) characters (presuming that our character pool has 74,605 characters), then the entropy of each character is log2(74605) = 16.19 bits.

Therefore, a 7-character CJK password such as "正确的马电池钉" would give you 16.19 bits of entropy times 7, which equals 113.33 bits total. I would need a password consisting of 18 characters if I wanted to match this using Latin letters, numbers, and special characters.
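These figures are straightforward to reproduce; the sketch below recomputes them from the two pool sizes given above:

```python
from math import ceil, log2

latin_pool = 90      # a-z, A-Z, 0-9, and common punctuation
cjk_pool = 74_605    # assumed size of the CJK character repertoire

latin_bits = log2(latin_pool)          # ≈ 6.49 bits per character
cjk_bits = log2(cjk_pool)              # ≈ 16.19 bits per character

total = 7 * cjk_bits                   # a 7-character CJK password
equivalent = ceil(total / latin_bits)  # Latin characters needed to match

print(f"{total:.1f} bits ≈ {equivalent} Latin characters")  # ~113 bits, 18 chars
```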

The vast majority of people cannot read Chinese, so they do not use CJK characters in their passwords. Nevertheless, the effectiveness of a complicated password is comparable to vaccination in that it confers herd immunity. If individuals only use passwords built from the letters a-z, crackers will conduct brute-force or dictionary attacks based on those letters alone. If people habitually use numbers and punctuation as well, attackers are forced to incorporate those elements into their vocabulary, which slows down their attacks. The attacker needs to try all of these additional combinations, regardless of whether or not your own password used any of them.

Because roughly one-third of the world's population is able to read and write CJK characters (the populations of China and Japan are enormous), if we permit people to use CJK characters in their passwords, then even if I don't use CJK characters myself, we can all benefit from the increased complexity that this provides.

To reiterate, knowledge of Chinese is not required in order to work with CJK characters. You can keep track of all of your passwords by using a password manager, as was previously suggested. It does not matter whether you are unable to read or write the password as long as the password manager is able to save it and accurately copy and paste it into the password box when it is required.


We’d like to remind everyone that your name, birth date, or any other identifying information should never be used as a password, regardless of the language you use.

In addition, the passwords you set on different websites should vary somewhat from one another; this keeps them memorable while preventing a single breach from compromising every account. It is also essential to connect your mobile phone number or email address to each account so that you may easily recover it if access is ever lost.

On the other hand, many people feel that passwords are becoming outdated and that there are now more efficient methods to handle computer security and authentication than by using passwords. Perhaps now is the moment for people to begin shifting their attention to other approaches. In the not-too-distant future, we will find out.

How secure is a password that uses Chinese characters?

Nov 23, 2022 — 1 min read

In the new version of Passwork, we have completely redesigned the System settings. They are now divided into three sections:

  1. Global — organization settings that determine the operations of most of the Passwork functions
  2. Default — the values of the settings that will be used if no other custom settings are specified
  3. Custom — settings that can be set for individual users and roles

Now you can set up different interface languages, configure authorization methods, and enable mandatory two-factor authentication for individual users and roles.

To do this, click "Create a new settings group" in Custom settings, add users or roles and select your desired settings. The newly created group will be added to the top of the list and will get the highest priority.
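Conceptually, the lookup works like layered overrides: Custom groups are consulted first in priority order, then Default values, then Global ones. The sketch below is a hypothetical illustration of that order, not Passwork's actual implementation; all names are invented:

```python
def resolve_setting(name, user, custom_groups, default, global_cfg):
    # Custom groups are checked first, in priority order
    # (the most recently created group sits at the top of the list)
    for group in custom_groups:
        if user in group["members"] and name in group["settings"]:
            return group["settings"][name]
    # Fall back to Default values, then to the Global configuration
    if name in default:
        return default[name]
    return global_cfg[name]

groups = [
    {"members": {"alice"}, "settings": {"mandatory_2fa": True}},
    {"members": {"alice", "bob"}, "settings": {"mandatory_2fa": False}},
]
default = {"language": "en"}
global_cfg = {"mandatory_2fa": False, "language": "en"}

# alice matches the top-priority group; bob falls through to the second
print(resolve_setting("mandatory_2fa", "alice", groups, default, global_cfg))  # True
print(resolve_setting("language", "bob", groups, default, global_cfg))         # en
```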

The following settings are now available:

  • Ability to create organization vaults and private vaults
  • Ability to create links to passwords
  • Mandatory 2FA
  • Time of automatic logout when inactive
  • Authorization method (by local password, LDAP password or SSO)
  • API usage
  • Interface language

We're already working to add new settings.

If you are already using Passwork — update your Passwork
Or request a free demo at

Introducing Custom settings

Nov 10, 2022 — 5 min read

Multi-factor authentication (often known as MFA for short) refers to the process of confirming the identity of a user who is attempting to log in to a website, application, or another type of resource using more than one piece of information. In practice, it is the difference between entering a password to gain access and entering a password plus a one-time password (OTP), or a password plus the answer to a security question.

Multi-factor authentication provides greater assurance that individuals are who they claim to be by requiring them to confirm their identity in more than one way, which in turn reduces the risk of unauthorised access to sensitive data. After all, entering a stolen password to get access is one thing; it is quite another to enter a stolen password and then be required to also input an OTP that was sent to the smartphone of the real user.

Multi-factor authentication can be achieved through the use of any combination of two or more factors. Two-factor authentication is another name for the practice of using only two factors to verify a user's identity.

How Does MFA work?

MFA is effective because it necessitates the collection of extra verification information (factors). One-time passwords (OTPs) are among the multi-factor authentication mechanisms that consumers encounter most frequently. OTPs are the four- to eight-digit codes that you frequently receive through email, SMS, or a mobile application. A fresh code is created at predetermined intervals or whenever an authentication request is sent in. The code is derived from a seed value that is assigned to the user when they first register, combined with some other component, which might simply be a counter that is incremented or a time value.
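As an illustration of the counter-based flavor, the sketch below implements HOTP as described in RFC 4226 using only Python's standard library; the time-based TOTP variant simply replaces the counter with the current 30-second interval:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian 8-byte counter, per RFC 4226
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both sides derive the same code from the shared seed and the counter,
# which advances on every authentication (values from RFC 4226, Appendix D)
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```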

The three categories of multi-factor authentication methods

Generally speaking, a technique of multi-factor authentication will fall into one of these three categories:

•  Something you know: a PIN, a password, or the answer to a security question

•  Something you have: an OTP, a token, a trusted device, a smart card, or a badge

•  Something you are: your face, fingerprint, retinal scan, or other biometric information

Methods of multi-factor authentication

In order to accomplish multi-factor authentication, you will need to utilise at least one of the following methods in addition to a password.


Biometric authentication

A method of verification that depends on a piece of hardware or software being able to recognize biometric data, such as a person's fingerprint, facial characteristics, or the retina or iris of their eye.

Push to approve

A notification is shown on someone's smartphone, prompting the user to tap the screen in order to approve or deny a request for access.

One-time password (OTP)

A collection of characters that are created automatically and are used to authenticate a user for a single login session or transaction only.


Text message (SMS)

A method for sending a one-time password (OTP) to the user's smartphone or other device.

Hardware token

A compact, portable OTP-generating device that is sometimes referred to as a key fob.

Software token

A token that does not exist in the form of a physical token but rather as a software program that can be downloaded onto a smartphone or other device.

The advantages of multi-factor authentication

Enhancing the level of safety

Authentication that takes into account multiple factors is more secure. After all, when there is only one mechanism defending a point of access, such as a password, all a malicious actor needs to do to gain admission is guess or steal that password. However, if admittance additionally requires a second (or perhaps a second and a third) factor of authentication, then it becomes far more difficult to obtain access, particularly if the requirement is for something harder to guess or steal, such as a biometric characteristic.

Providing support for various digital initiatives

Multi-factor authentication is a key enabler in today's business world, where more companies are keen to deploy remote workforces, more customers want to purchase online rather than in shops, and more companies are migrating apps and other resources to the cloud. In this day and age, it can be difficult to ensure the safety of organisational and e-commerce resources. Multi-factor authentication can be an extremely useful tool for assisting in the protection of online interactions and financial transactions.

Are there any disadvantages to multi-factor authentication?

It is feasible to establish a less easy-to-access environment while building a more secure one — and this might be a disadvantage (this is especially true as zero trust, which sees everything as a possible threat, including the network and any apps or services running on it, gains acceptance as a safe access basis). No employee wants to spend additional time each day dealing with several impediments to getting on and accessing resources, and no consumer wants to be slowed down by multiple authentication procedures. The objective is to strike a balance between security and convenience so that access is secure but not so onerous that it causes excessive hardship for those who legitimately require it.

The role of risk-based authentication in multi-factor authentication

One technique to achieve a balance between security and convenience is to increase or decrease authentication requirements based on the risk associated with an access request. This is what risk-based authentication entails. The risk might be associated with either what is being accessed or who is requesting access.

The risk presented by what is accessed

For example, if someone seeks digital access to a bank account, is it to initiate a money transfer or simply to verify the status of an existing transfer? Or, if someone interacts with an online shopping website or app, is it to place an order or to monitor the progress of an existing purchase? For the latter, a username and password may be sufficient, but multi-factor authentication makes sense when a high-value item is at stake.

The risk presented by the person requesting access

When a remote employee or contractor seeks access to the corporate network from the same city, on the same laptop, day after day, there's little reason to assume it's not that person. But what happens when a request from Mary in Minneapolis arrives from Moscow unexpectedly one morning? A request for extra authentication is warranted due to the possible danger – is it really her?
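The step-up logic described above can be sketched as a simple scoring function. This is a minimal illustration; the signals and the threshold are made-up assumptions, not any vendor's actual policy:

```python
# Hedged sketch of risk-based step-up authentication.
# Signals and threshold are illustrative assumptions.
def requires_extra_auth(known_device: bool,
                        usual_location: bool,
                        high_value_action: bool) -> bool:
    risk = 0
    if not known_device:
        risk += 2          # unfamiliar laptop or phone
    if not usual_location:
        risk += 2          # e.g. a request suddenly arriving from Moscow
    if high_value_action:
        risk += 1          # e.g. initiating a money transfer
    return risk >= 2       # past the threshold, ask for a second factor
```

Under this toy policy, checking a transfer's status from a known device in the usual city passes with just a password, while the same request from an unfamiliar location triggers multi-factor authentication.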

The future of Multi-Factor Authentication: AI, Machine Learning and more

Multi-factor authentication is always improving to provide enterprises with access that is both more secure and less unpleasant for individuals. Biometrics is an excellent example of this concept. It's more secure, since stealing a fingerprint or a face is difficult, and it's more convenient because the user doesn't have to remember anything (such as a password) or make any other substantial effort. The following are some of the current advancements in multi-factor authentication.

Machine learning (ML) and artificial intelligence (AI)

AI and ML may be used to identify characteristics that indicate if a particular access request is "normal" and as such, does not require extra authentication (or, conversely, to recognize anomalous behaviour that does warrant it).

Fast Identity Online (FIDO)

The FIDO Alliance's free and open standards serve as the foundation for FIDO authentication. It facilitates the replacement of password logins with safe and quick login experiences across websites and applications.

Authentication without a password

Rather than utilising a password as the primary means of identity verification and complementing it with alternative non-password methods, passwordless authentication does away with passwords entirely.

You can be certain that multi-factor authentication will continue to evolve and develop in pursuit of ways for individuals to prove they are who they say they are, reliably and without having to jump through an endless number of hoops.

What exactly is multi-factor authentication (MFA) and how does it work?

Nov 10, 2022 — 4 min read

It's possible that you've become familiar with the term "time-based one-time passwords" (TOTP) in relation to "two-factor authentication" (2FA) or "multi-factor authentication" (MFA).

However, do you really understand TOTP and how they work?

The Meaning of TOTP

"Time-based one-time passwords" are passwords that are valid for only 30 to 90 seconds after being generated from a shared secret value and the current system time.

Passwords are almost always composed of six-digit sequences that are changed every thirty seconds. On the other hand, some implementations of TOTP make use of four-digit codes that become invalid after a period of 90 seconds.

The TOTP algorithm is an open standard, detailed in RFC 6238.

What is a shared secret?

TOTP authentication uses a shared secret in the form of a secret key that is shared between the client and the server.

To the naked eye, the shared secret looks like a Base32-encoded string, similar to the following example:

JBSWY3DPEHPK3PXP

Although this representation is not human-readable, computers can parse and make sense of it without difficulty.

The client and the server both have a copy of the shared secret safely stored on their respective systems after a single transmission of the secret.

If an adversary is able to discover the value of the shared secret, they will be able to generate their own legitimate one-time passcodes. Because of this, every implementation of TOTP needs to pay particular attention to storing the shared secret securely.
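On the server side, creating a fresh shared secret can be sketched with the Python standard library. The 20-byte (160-bit) length here is an assumption matching many common deployments, not a requirement of the standard:

```python
import base64
import secrets

def new_shared_secret(num_bytes: int = 20) -> str:
    """Generate a random shared secret and return its Base32 form."""
    raw = secrets.token_bytes(num_bytes)       # cryptographically strong bytes
    return base64.b32encode(raw).decode("ascii")
```

Using `secrets` rather than `random` matters: the shared secret is the only thing standing between an adversary and valid one-time codes, so it must come from a cryptographically strong source.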

What is system time?

There is a clock that is integrated into every computer and mobile phone that measures what is referred to as Unix time.

Unix time is measured in terms of the number of seconds that have passed since January 1, 1970, at 00:00:00 UTC.

Unix time appears to be nothing more than a string of digits, for example:

1666872000

This simple number, however, is excellent for generating an OTP, since the clocks of most electronic devices that track Unix time are sufficiently synchronised with one another.

Implementations of the TOTP Authentication Protocol

Relying on passwords alone is not recommended. However, you can increase security by combining a traditional password with a time-based one-time password (TOTP). This combination is known as two-factor authentication, or 2FA, and it can be used to securely authenticate your accounts, virtual private networks (VPNs), and apps.

TOTP can be implemented in hardware and software tokens:

•  The TOTP hardware token is a physical keychain that displays the current code on a small screen

•  The TOTP soft token is a mobile application that displays a code on a phone’s screen

It makes no difference whether you use software tokens or hardware tokens. The purpose of using two different forms of authentication is to increase the level of protection afforded to your online accounts. You have access to a one-time password generator that you may use during two-factor authentication to obtain access to your account. This generator is available to you regardless of whether you have a key fob or a smartphone with an authentication app.

How does a time-based one-time password work?

Each time-based one-time password (TOTP) is derived from the value of the shared secret and depends on the current time.

To produce a one-time password, the TOTP method takes into account both the current Unix time and the shared secret value.

The time-based one-time password algorithm is a variant of the HMAC-based one-time password (HOTP) algorithm in which the counter is replaced with the value of the current time.

Without getting too bogged down in technical language: the TOTP technique is based on a hash function that, given an input of arbitrary length, generates a short character string of fixed length. One of a hash function's strengths is that, given only its output, you cannot recreate the original parameters that were used to generate it.

It is essential to keep in mind that TOTP offers a higher level of security than HOTP. Every 30 seconds, a brand new password is produced while using TOTP. When using HOTP, a new password is not created until after the previous one has been entered and used. The fact that the one-time password for HOTP continues to work even after it has been used for authentication leaves hackers with a significant window of opportunity to mount a successful assault.

Authentication using Multiple Factors (MFA)

A user must first register their TOTP token in any multi-factor authentication (MFA) system that supports a time-based one-time password before they can use the device to connect to their account.

Some TOTP soft tokens require the registration of a separate OTP generator for each account. This effectively means that if you add two accounts to your authenticator app, the app will produce two temporary passwords, one for each account, every 30 seconds. A single TOTP soft token (authenticator app) may support an unlimited number of one-time password generators. Keeping an individual generator per account means that if one account is compromised, the security of all the other accounts is preserved.

To use 2FA, a secret must be created by the security system and shared with the TOTP token.

How is the shared secret sent to the token?

Typically, the security system creates a QR code and requests that the user scan it using an authenticator app.

A QR code of this type is a visual depiction of a lengthy string of letters. The shared secret is, roughly speaking, part of this lengthy sequence.

When the user scans the QR code with the authenticator app, the software decodes the image and extracts the secret. The authenticator app can then use the shared secret to generate one-time passwords.
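The string encoded in such QR codes commonly follows the de facto `otpauth://` key URI format used by Google Authenticator and many other apps. A hedged sketch of building and parsing one (the issuer `ExampleCorp` and account `alice@example.com` are made up for illustration):

```python
from urllib.parse import parse_qs, quote, urlencode, urlparse

def build_otpauth_uri(issuer: str, account: str, secret_b32: str,
                      digits: int = 6, period: int = 30) -> str:
    """Build an otpauth:// key URI of the kind typically encoded in QR codes."""
    label = quote(f"{issuer}:{account}")
    query = urlencode({"secret": secret_b32, "issuer": issuer,
                       "digits": digits, "period": period})
    return f"otpauth://totp/{label}?{query}"

def extract_secret(uri: str) -> str:
    """Roughly what an authenticator app does after decoding the QR image."""
    return parse_qs(urlparse(uri).query)["secret"][0]
```

The QR image itself is just a machine-readable rendering of this URI; once decoded, the `secret` parameter is all the app needs to start generating codes.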

When registering a TOTP token, the secret is transmitted only once, which alleviates many of the concerns about the secret being stolen in transit. An adversary can still steal it, but they must first physically steal the token.

It works even when you're not connected to the internet!

To use the TOTP technique, you do not need an active internet connection on your smartphone or hardware token.

The TOTP token only needs to obtain the shared secret value once. The security system and the OTP generator can then produce successive password values without needing to communicate. As a consequence, time-based one-time passwords (TOTP) work even when the device is offline.

All about Time-Based One-Time Passwords (TOTP)

Oct 27, 2022 — 5 min read

Facial recognition is a technology-based method of identifying a human face. Such a recognition system maps facial characteristics from an image or video using biometrics. To identify a match, it compares the information gained to a database of known faces. Facial recognition may aid in the verification of a person's identification, but it also presents privacy concerns.

The facial recognition industry is predicted to expand from $4 billion in 2017 to $7.7 billion in 2022. This is due to the fact that such technology holds several business uses including monitoring and marketing.

But here's where things become difficult. If you value your privacy, you undoubtedly want some say over how your personal information (your data) is utilised. The truth is, your "faceprint" is your personal information.

How does facial recognition work?

You might be adept at identifying people's faces. You probably have no trouble recognizing the face of a family member, friend, or acquaintance. You recognize their facial characteristics — their eyes, nose, and mouth and their facial movements.

That is exactly how a face recognition system operates but on a much larger, computational scale. Recognition technology sees data where you see a face. That information may be saved and retrieved. According to Georgetown University research, half of all American adults have their photos recorded in one or more facial-recognition databases that law enforcement authorities may consult should they wish to.

So, how does facial recognition really work? Although certain technologies differ, most follow a standard procedure:

•  A photograph or video of your face is obtained. Your face might be scanned alone or in a crowd. Your photo might show you gazing straight ahead or almost in a profile view.

•  The geometry of your face is scanned by facial recognition software. The distance between your eyes and the distance from your forehead to your chin are important considerations. The program identifies facial landmarks (one system recognizes 68 of them), which are all important in differentiating your face. As a result, your facial signature is created.

•  A database of known faces is matched to your facial signature, which is a mathematical formula. Consider the following: At least 117 million people in the United States have photos of their faces in one or more police databases. The FBI has access to 412 million of such pictures for searches, according to a May 2018 report.

•  A decision is made. Your faceprint could match one in a database bringing back a positive result.

How effective is facial recognition?

Experts are concerned that face recognition might result in incorrect identifications. What if a police agency wrongly identifies someone smashing a shop window during a riot as someone who was nowhere near the incident using facial recognition technology? How probable is it that such an incident will occur?

It depends. According to the National Institute of Standards and Technology tests, the top face recognition algorithm has an error rate of under 0.08% as of April 2020. This is a significant improvement from 2014 when the best algorithm on the market had an error rate of 4.1%.

According to a 2020 report by the Centre for Strategic & International Studies (CSIS), accuracy is greater when identification algorithms are used to match persons to clear, static photos, such as passport photos and mugshots. When applied in this manner, face recognition algorithms achieved up to 99.97% accuracy on the National Institute of Standards and Technology's Facial Recognition Vendor Test.

In practice, however, accuracy rates are often lower. According to the CSIS report, the Facial Recognition Vendor Test discovered that the error rate for one algorithm increased from 0.1% when faces were matched to high-quality mugshots to 9.3% when faces were matched against images of people captured in public. When individuals were not looking straight at the camera or were partly concealed by shadows or objects, error rates increased.

Another issue is ageing. According to the Facial Recognition Vendor Test, middle-tier facial recognition algorithms exhibited error rates that increased by roughly a factor of ten when attempting to match photographs of participants taken 18 years earlier.

Who employs facial recognition?

Many individuals and organisations utilise face recognition in a variety of settings. Here are a few examples:

Airport administration

In airports, facial recognition technologies can monitor persons entering and exiting. The technology has been utilised by the Department of Homeland Security to identify persons who have overstayed their visas or are under criminal investigation.

Product manufacturers of mobile phones

Apple originally employed facial recognition to unlock the iPhone X, and since, the technology has been carried over to all subsequent models. Face ID authenticates — it ensures that you are who you say you are when you access your phone. According to Apple, the likelihood of a random face unlocking your phone is one in one million.

Websites for social networking businesses

When you post a picture to Facebook, an algorithm is used to detect faces. If you wish to tag others in your images, the social media firm will ask you. If you answer yes, a connection to their profiles is created. Facial recognition on Facebook is 98 percent accurate.

Entrance businesses and restricted zones

Some businesses have abandoned security badges in favour of facial recognition technologies.

Religious congregations at places of worship

Face recognition has been used by churches to scan their congregations and see who is in attendance. It's a convenient way to keep track of regular and irregular attendees, as well as to tailor contribution requests.

Campaign marketers and advertisers

When targeting groups for a product or concept, marketers often consider factors such as gender, age, and ethnicity. Even during a performance, facial recognition may be used to determine such audiences.

The use of facial recognition in police enforcement

Today, facial recognition databases play an important role in law enforcement. According to an Electronic Frontier Foundation investigation, law enforcement agencies frequently collect mugshots from jailed people and compare them to local, state, and federal face recognition databases.

Law enforcement organisations may use these mugshot databases to identify persons in images collected from a number of sources, including closed-circuit television cameras, traffic cameras, social media, and photos taken by police officers themselves.

According to the Electronic Frontier Foundation, police officers may also use their mobile phones, tablets, or other devices to take images of cars or pedestrians and instantaneously match their photos to the faces in one or more facial recognition databases.

In addition, police enforcement has utilised face recognition to identify persons who may be sought in connection with crimes at huge events such as concerts, sports events, or the Olympics.

Several face recognition technologies are available to the federal authorities. Its primary database, however, is the FBI's Next Generation Identification system. This collection comprises over 30 million images.


Opponents of face recognition systems argue that although they give some protection, it is not enough to outweigh a feeling of independence and freedom. Many people believe that the usage of these technologies violates their privacy, but their worries don't stop there. They also emphasise the dangers of identity theft. Even face recognition companies recognize that as the technology becomes more widely used, the probability of identity theft or fraud increases.

As with many emerging technologies, the enormous promise of facial recognition has its downsides, but manufacturers are working to improve the usability and accuracy of their systems every day.

All about facial recognition

Oct 20, 2022 — 4 min read

Over the last several years, Chinese smartphones have gained a poor reputation when it comes to privacy, owing to a variety of factors, including a lack of customer trust and global political events that have not been particularly kind to China. This marks a reversal from the mid-2010s, when China's worldwide image improved significantly, owing mostly to its entry into the smartphone market and its advances in 4G and 5G technology.

The smartphone market is now one of the most rapidly developing areas of the technology sector worldwide. The number of mobile devices sold around the globe has skyrocketed from 100 million in 2007, the year that saw the advent of the smartphone revolution, to over 1.5 billion. Because smartphones are the most frequent way of connecting to the internet, companies that operate in this sector are vital to the development of the technology sector.

We saw the original Apple iPhone debut 14 years ago in 2007, which surely signaled the beginning of a new era of information. We've seen huge players like Samsung join the market throughout the years, and more lately, Chinese competitors like Huawei and Xiaomi have been eating up worldwide market share with their low-cost handsets. Moreover, Oppo and Vivo, which have a tiny but consistent market share and are even gaining popularity in the United States, should not be overlooked.

Apple has never been as successful in China as it is elsewhere, owing to the country's preference for domestic produce and local brand loyalty. Having said that, Apple has always been in demand there. Outside of China, however, Apple has controlled the smartphone industry for a long time, and the whole globe often lies in anticipation of their next news conference and the release of their next iPhone. For many years, market supremacy was exchanged between Apple and Samsung, with Samsung ruling the majority of the time.

However, the worldwide smartphone market has shifted recently. With such strong competition (Samsung, Xiaomi, Huawei) on the horizon, as well as Apple's extremely high pricing for its current products, Chinese competitors have adapted and established a stable hold on the market for the foreseeable future. Chinese smartphone manufacturers are now serious rivals to the established giants, offering minimalist design approaches similar to those Apple is renowned for and entirely redesigning their marketing efforts. Today, the US and EU markets are the most significant target markets for Chinese smartphones.

However, there seem to be severe privacy concerns that are impeding Chinese smartphones and their image.

What is the issue with Chinese smartphones?

There are a number of Chinese companies that are now producing smartphones on the market, with Huawei and Xiaomi being the most well-known and popular brands in countries other than China. The majority of customers may not be acquainted with some of the other "cheaper" businesses, such as Honor and Realme. There are a great number of other Chinese smartphone manufacturers, perhaps too many to list here.

Given the amount of political tension that exists between the United States and China, what difference does it make whether you want to buy a Chinese smartphone or already own one? Unfortunately, Chinese smartphones have been afflicted with a number of privacy and security issues, which can be broken down into the following categories:

•  Spyware already installed

•  Vulnerabilities when it comes to malware

•  Data theft

•  "Backdoors" in Hardware

•  Encryption-related flaws

Moreover, in addition to the malware that comes pre-installed on some Chinese devices, there are extra hazards involved in downloading particularly popular Chinese social networking applications, such as:

•  TikTok

•  WeChat

Conclusions for your smartphone's overall security

Now that we've covered why there is so much bad buzz about Chinese smartphones and the privacy issues they pose, let's not forget that a large part of it has to do with the political tensions between China and the United States. Allegations of espionage, hacking, and danger to data have been made a great many times. Beyond that, there is a fact that matters more for the typical user: Chinese phones are built on Android, which has a far bigger user base and is therefore more exposed to attacks.

Let us highlight one thing: it is certainly difficult to declare that these technologies are safe, but the question is: what really is safe in this day and age? Should this make you, the regular person, think twice about purchasing a smartphone made in China? It is difficult to say what constitutes "security" at this time, and whether or not governments will try to gain access to your phone depends heavily on who you are and how sensitive your data is.

However, if you are concerned about your privacy, there are a few steps you should take for your own protection and peace of mind, regardless of the device you are using or the nation in which it was manufactured; the following is a list of these steps, which you may read below:

•  Always utilize a reputable virtual private network (VPN)

•  Consider the possibility that iOS is more secure than Android in general

•  Make sure your phone is protected by a strong password

•  Ensure that multi-factor authentication is used at all times

•  If at all possible, avoid sharing critical information online

•  Keep your smartphone's software up to date at all times

•  Never use suspicious applications or access third-party app marketplaces

How secure are Chinese smartphones?

Oct 16, 2022 — 4 min read

Most of web3's security is based on the blockchain’s unique ability to be resistant to human intervention. However, because of the associated feature of finality, where transactions are generally irreversible, these software-driven networks are an attractive target for attackers. Likewise, as the value of blockchains — the distributed computer networks that underpin Web3 — grows, they become increasingly appealing.

While web3 differs from previous web iterations, we have seen similarities with prior software security patterns. In many cases, the most serious issues stay unchanged. Advocates, whether they are builders, security teams, or everyday crypto users, can better secure themselves, their projects, and their wallets by learning these areas. Based on people's experiences, we've compiled a list of recurring themes and predictions.

Chase the money

Typically, attackers seek to maximise their return on investment. Because the potential return is bigger, they may devote more time and effort to attacking protocols with a higher "total value locked," or TVL for short.

Hacking groups with the most resources are more likely to target high-value systems, and new, more valuable exploits are also more likely to be aimed at these targets.

Low-cost assaults, such as phishing, will never go away, and we expect them to become more prevalent in the near future.

Fixing a hole

As developers learn from tried-and-true assaults, they can improve web3 software to the point where it is "safe by default." This frequently entails tightening up application programming interfaces (APIs) to make it more difficult for people to add vulnerabilities by mistake.

Because security is always a work in progress, and nothing is ever immune to hacking, defenders and developers may make attacks more expensive by removing most of the low-hanging fruit for attackers.

The success of the following attack types may be considerably reduced as security policies and tools improve: governance attacks, price oracle manipulation, and re-entrancy issues.

Platforms that cannot provide "perfect" security will have to employ exploit mitigation methods to decrease the possibility of losses. This can deter attackers by lowering the "benefit" or possible benefit component of their cost-benefit analysis.

Attack classification

Attacks on various systems can be categorised based on their similarities. The sophistication of the attack, the extent to which attacks can be automated, and the preventive measures available to fight against them are all defining aspects.

The following are some of the types of assaults that users have observed in the most recent hacks. We've also included our thoughts on the current threat landscape and what we anticipate from web3 security in the future.

Apex predators: APT operations

Advanced attackers, often known as advanced persistent threats (APTs), are a security nightmare. Their motivations and capabilities vary significantly, but they are usually well-resourced and, as the term suggests, persistent; unfortunately, they are likely to always be present. APTs carry out a wide range of operations, but these threat actors are the most likely to actively attack a company's network layer to achieve their objectives.

We know that certain advanced groups are actively pursuing web3 initiatives, and assume that there are others who have yet to be discovered. The people behind the most serious APTs typically reside in countries with no extradition accords with the US and EU, making it harder to punish them for their actions. Lazarus Group, a North Korean gang responsible for the greatest cryptocurrency heist on record, is one of the most well-known APT attackers.

We anticipate that APTs will continue to operate as long as they can monetize their activities or achieve various political objectives.

Social engineers engage in customer phishing

Phishing is a well-known and prevalent issue. Phishers attempt to trick their victims into falling into a trap by delivering bait messages over numerous channels such as instant messengers, email, Twitter, Telegram, Discord, and compromised websites. If you look through your spam folder, you're sure to find hundreds of efforts to deceive you into disclosing personal information or stealing money.

Phishing efforts are targeting web3 users now that it allows people to directly exchange assets like tokens or NFTs quickly. These assaults are the simplest way for persons with little to no technical knowledge to profit from cryptocurrency theft. They remain, however, a viable technique for organised teams with serious goals or advanced groups looking to undertake large-scale wallet-emptying attacks, such as website hijacking.

We anticipate a rise in these attacks because phishing is inexpensive and phishers seek to adapt to and circumvent the most recent security features. Increased education and awareness, better filtering, clearer warning banners, and tighter wallet restrictions can all help to improve user protection.

Supply chain vulnerabilities

Third-party software libraries expose a significant attack surface. This has long been a security concern for pre-web3 systems as well, as evidenced by the Log4j vulnerability that compromised a widely used Java logging library in December 2021. Attackers scan the Internet for known vulnerabilities in order to locate unpatched flaws to attack.

Although imported code was not built by your engineering team, it must still be maintained. Teams must keep an eye out for vulnerabilities in their software components, ensure that updates are deployed, and stay up to speed on the dynamics and progress of the projects on which they rely. The real and immediate cost of exploiting web3 software vulnerabilities makes communicating these issues to library users challenging. How and where teams should communicate such issues in a way that does not inadvertently jeopardise users' funds remains an open question.

We expect supply chain vulnerabilities to rise as the dependency and complexity of software systems grow. Opportunistic hacking attacks are also expected to rise until solid, standardised ways of disclosing web3 security flaws are created.

Web3 Security: Types of attack

Oct 14, 2022 — 4 min read

GPS devices have been made accessible to a wider market as technology advances, and the degree to which our daily lives rely on precise location and timing has also increased. For tourists to navigate effectively from one location to another, the use of a global positioning system (GPS) has become standard.

Businesses and people now have access to possibilities that were previously unavailable because of GPS. On the other hand, this is not always a positive thing since spoofing might make GPS systems susceptible to cyber assaults. Let's find out the main things about spoofing and how to keep your GPS safe.

How does GPS spoofing work and what is it?

GPS signal spoofing occurs when an attacker imitates the original GPS signal by substituting a phoney GPS satellite signal. The "false" signal indicates a change in location, navigation, or time to the recipient.

Have you ever driven to the local mall, but your GPS said that you were at the library? If your GPS has ever told you that you are at an incorrect location, you have likely been the victim of GPS signal spoofing.

How does it work?

To understand how spoofing works, we must first understand how global navigation satellite systems operate. The satellites transmit communication signals to our devices while orbiting the Earth in a medium earth orbit at a height of approximately 20,400 kilometres.

Satellite signals are sometimes rather weak as they must travel such a long distance to reach your device. GPS communications are not encrypted and may be read. As a result, they are an apparent target for anybody wishing to record, transmit, or modify them.

In a GPS spoofing attack, a terrestrial radio transmitter imitates GPS signals at a strength that exceeds that of the genuine satellite signal, replacing authentic GPS signals with fake ones.

But how can a GPS signal be tampered with? This usually involves a GPS spoofing device or spoofing software, such as an app. To spoof, the transmitter must be near the GPS-enabled device; it then imitates the genuine signal to fool the GPS receiver into reporting a different location.
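One simple plausibility check a receiver-side application could apply is to flag position fixes that imply physically impossible travel speeds. This is an illustrative sketch, not a production anti-spoofing method, and the speed threshold is an assumption:

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def looks_spoofed(prev_fix: tuple[float, float, float],
                  new_fix: tuple[float, float, float],
                  max_kmh: float = 1200.0) -> bool:
    """Each fix is (unix_time, lat, lon); flag physically impossible jumps."""
    (t1, lat1, lon1), (t2, lat2, lon2) = prev_fix, new_fix
    hours = max((t2 - t1) / 3600.0, 1e-9)      # avoid division by zero
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh
```

A device that was in Minneapolis a minute ago and suddenly reports Moscow fails the check, while ordinary driving-speed movement passes.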

Spoofing technology was formerly difficult to get a hold of. It was once a costly technology only accessible to the military. Now, a transmitter of this kind is already widely accessible. GPS jammers can be found online for as little as 100 USD. As a result, nearly anybody can impersonate GPS signals.

Who falsifies GPS signals and why?

Any satellite navigation-based technology is susceptible to spoofing. Spoofing is practically free, readily accessible, and immensely popular. Virtually everyone uses it, from privacy advocates to Uber drivers and teenagers.

Since GPS is essentially accessible to everyone, its security has become a big problem. There are several reasons to alter the GPS signal. These consist of:

•  Access to country-restricted content

Some individuals use spoofing to alter their device's receivers in order to get access to country-restricted material, services, games, applications, and even television programs and movies.

For instance, certain programs on Hulu, Netflix, and other streaming services are only accessible in particular regions. Since flying to another country just to watch a program is hardly practical, spoofing allows you to mask your true location and access region-restricted content. Many people use VPNs for the same reason.

•  For military purposes

GPS equipment was initially intended for military use, and, ironically, the military was the first to falsify GPS signals. Armed forces may spoof GPS to simulate their position and conceal their activity. The military may also carry out GPS spoofing attacks for tactical navigation, guided weaponry, and command-and-control operations.

•  To avoid motion tracking and conceal locations

Numerous individuals use spoofing to generate a false GPS position, preventing applications from precisely tracking their activities. Most individuals use this to keep some sense of control over their data by instructing their applications to show an incorrect location.

Additionally, teenagers use spoofing to conceal their whereabouts from their parents. This is how easy spoofing has become.

•  To conceal unlawful conduct

Criminals may also employ spoofing to conceal fraudulent acts such as kidnappings, car thefts, and evidence tampering, or to induce public panic by causing accidents by interfering with automobiles. They may even fake a GPS to send victims to online or physical danger zones.

GPS safety suggestions

Here is some advice on how to prevent GPS spoofing attacks:

•  Install decoy antennae

Install decoy antennae in a visible location, away from the genuine ones. This helps ensure that spoofing attacks target the decoys instead of disrupting real reception. A reasonable separation is at least 300 metres.

•  Carefully consider where to place your antenna

The antenna's optimum placement should offer an unobstructed view of the sky. Buildings and other objects can then block signals arriving from the ground or neighbouring public areas.

Install antennas in areas where they are not visible to the general public, or use barriers such as plastic fencing to hide their position while not interfering with GPS signals.

•  Follow internet hygiene guidelines

Individuals and companies should change and update passwords regularly, install security patches and updates, utilise firewalls and virus protection, and consider adopting multi-factor authentication and other cyber defences to avoid spoofing attacks.

•  Turn off any GPS-enabled gadgets that are not in use

Individuals and businesses that utilise GPS-enabled devices should keep them turned off when not in use. This will keep spoofing attempts at bay.

•  Use multiple antennae

Install two or more antennae at opposite ends of a building or ship to identify discrepancies between their readings and switch to backup navigation systems instantly.

GPS monitoring and location sharing pose significant privacy risks, and GPS spoofing can be very dangerous for people, corporations, and governments. At the same time, spoofing enables users to safeguard themselves against tracking and other security risks. A balance must therefore be struck.

How secure is GPS?

Oct 1, 2022 — 50 min read

1. Basic Information about SSL

1.1 What Are ‘Certificates’ and Why Are They Needed?

Certificates are text files on a web server whose placement and content confirm the identity of the owner responsible for a web resource. Owner confirmation is carried out by specially authorized companies or divisions of an organization: Certification Centers (also referred to as CCs, or Certificate Authorities, CAs).

Additionally, certificates contain the public key required to establish an encrypted connection and prevent data interception by intruders. The protocols over which this connection is established end with the letter "S", from the English word "Secure": see HTTP(S), FTP(S), etc. This means that standard internet protocols, such as HTTP and FTP, are run over an encrypted TLS connection, whereas ordinary messages are exchanged over TCP/IP without encryption. TLS (Transport Layer Security) is a protocol that ensures secure data transfer; it is the successor to SSL (Secure Sockets Layer), an earlier cryptographic protocol. It uses asymmetric cryptography to authenticate the key exchange so that a session can be established, symmetric encryption to preserve the confidentiality of the session, and cryptographic message signatures to guarantee the delivery of information without loss. Although TLS is the only protocol of the family actually used today, out of habit the entire family is called SSL, and the accompanying certificates are called SSL certificates.

The use of SSL certificates primarily allows you to prevent data theft by using clones of sites of well-known services, when attackers duplicate the main pages of said sites, employ similar domain names, and forge personal information forms. The user may input personal information about themselves, their documents, and payment details on fake websites. As a result, users' personal information may subsequently be used to gain unauthorized access to other resources or social networks so it can be resold, or used to steal funds from a bank account. Service owners can help customers avoid these problems by configuring HTTPS on their resource and demonstrating the authenticity of their web pages to their users directly in the browser address bar.

As mentioned above, TLS/SSL is used to encrypt traffic from the client to the web server, and this prevents intruders from intercepting traffic on public unsecured networks.
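As a brief illustration of these client-side guarantees, the sketch below uses Python's standard ssl module; the attribute values shown reflect modern CPython defaults and are an assumption about the runtime, not part of the article.

```python
import ssl

# A default client-side TLS context: the certificate chain must validate
# against pre-installed root certificates, and the hostname must match
# the certificate, mirroring the browser behaviour described above.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate validation is on
print(ctx.check_hostname)                    # hostname checking is on
```

Wrapping a socket with this context enforces both checks before any application data is exchanged.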

1.2 How Do They Work?

When it comes to TLS/SSL, three parties are involved: the client, the consumer of services or goods on the internet; the server, the provider of these services or goods; and the Certification Center, whose duties include ensuring that the domain name and resource belong to the organization specified in the registration information of the certificate.

The TLS/SSL algorithm works as follows:

1.   The owners of the service contact the Certification Center through partners and provide information about themselves.

2.   The Certification Center makes inquiries about the owners of the service. If the primary information is verified, the Certification Center issues the owners of the service with a certificate which includes the verified information and a public key.

3.   The user launches a browser on a personal device and goes to the service page.

4.   The browser, along with other standard operations, requests the SSL certificate while the service page is loading.

5.   The service sends the browser a copy of the certificate in response.

6.   The browser checks the validity period and authenticity of the copy of the certificate using the Certification Centers’ pre-installed root certificates. If everything is in order, the browser sends the corresponding response to the service, signed with the client's key.

7.   The service receives confirmation of the client’s verification with their digital signature and they begin an encrypted session.

Session encryption is carried out using PKI (Public Key Infrastructure). PKI is based on the following principles:

1.   There is a related pair of non-interchangeable control sequences of almost random characters called keys: the public key, also referred to as the open key, and the private key, also referred to as the secret key.

2.   Any dataset can be encrypted with a public key. Because of this, the public key can be freely transmitted over the network, and an attacker will not be able to use it to harm users.

3.   The private key is known only to its owner and can decrypt the received data stream into structured information that has been encrypted with a public key paired with it. The private key should be stored on the service and used only for local decryption of messages that have been received. If an attacker is able to gain access to a private key, then procedures for revoking and reissuing the certificate must be initiated to make the previous certificate useless. A leak of a private key is called a compromise.

An SSL certificate from a Certificate Authority is one way of distributing a server’s public key to clients over unsecured networks. After verifying the validity of the certificate, the client encrypts outgoing messages with the public key attached to the certificate, and the server decrypts them with the paired private key, thereby ensuring a secure communication channel.

1.3 Who Releases Them?

Certificates are issued by Certification Centers upon the request of customers. The Certification Center is an independent third-party organization that officially verifies the information specified in a certificate request: whether the domain name is valid; whether a network resource with this name belongs to the specific company or individual to whom it is registered; whether the site of the company or individual to whom the SSL certificate was issued is genuine; and other checks. The most famous international Certification Centers are Comodo, GeoTrust, GoDaddy, GlobalSign, and Symantec. The root SSL certificates of these Certification Authorities are pre-installed as trusted in all popular browsers and operating systems.

It is often more cost-effective to purchase certificates not directly from the Certification Center but from their partners instead, as they offer wholesale discounts. In Russia, many companies and hosting providers that have their own tariffs for the SSL certificate service sell certificates from well-known Certification Centers.

2. Advanced Information about Certificates

2.1 Which Crypto Algorithms Are Used?

The following algorithms are used to establish a secure connection:

  • Encryption algorithm
  • Hashing algorithms
  • Authentication algorithms

The most commonly used encryption algorithms for cryptographic operations in TLS/SSL are combinations of RSA (an initialism of the names of the creators Rivest, Shamir and Adleman), DSA (Digital Signature Algorithm, patented by the US National Institute of Standards and Technology) and several variations of the Diffie–Hellman algorithm (DH), such as ephemeral DH (Ephemeral Diffie–Hellman, EDH) and DH based on elliptic curves (Elliptic-curve Diffie–Hellman, ECDH). These Diffie–Hellman variations, unlike the original algorithm, provide forward secrecy: previously recorded traffic cannot be decrypted later, even if the server's secret key is obtained, because fresh algorithm parameters are generated each time the channel is re-established after a forced break or connection timeout.
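As a toy illustration of the Diffie–Hellman idea mentioned above, the sketch below uses deliberately tiny, insecure parameters; real TLS uses large primes or elliptic curves, so these numbers are for demonstration only.

```python
import secrets

# Toy Diffie-Hellman key agreement. The public parameters p and g are
# demo values with no security whatsoever.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1  # Alice's ephemeral private key
b = secrets.randbelow(p - 2) + 1  # Bob's ephemeral private key

A = pow(g, a, p)                  # Alice sends g^a mod p
B = pow(g, b, p)                  # Bob sends g^b mod p

shared_alice = pow(B, a, p)       # (g^b)^a mod p
shared_bob = pow(A, b, p)         # (g^a)^b mod p
```

Both sides arrive at the same shared secret without it ever crossing the wire, and because the private keys are ephemeral, a later key compromise does not expose past sessions.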

Hashing algorithms are based on the SHA (Secure Hash Algorithm) family of mathematical functions. A hash function converts the original data array into a string of a fixed length, and this length determines the processing time and computing power required. All cipher suites today support the SHA-2 hashing family, most often SHA-256. SHA-512 has a similar structure, but its word length is 64 bits rather than 32, the number of rounds in the cycle is 80 rather than 64, and the message is divided into blocks of 1024 bits rather than 512 bits. Previously, the SHA-1 and MD5 algorithms were used for the same purpose, but today they are considered vulnerable to attack. There is also a newer standard, SHA-3 (Keccak). In TLS, hashes underpin the MAC (Message Authentication Code) used to verify the integrity of the transmitted data: the MAC maps the message data to a fixed-length value that accompanies the message.
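The block and digest sizes mentioned above can be checked directly with Python's standard hashlib module:

```python
import hashlib

msg = b"hello, TLS"

# SHA-256 processes 512-bit (64-byte) blocks and yields a 256-bit
# (32-byte) digest; SHA-512 processes 1024-bit (128-byte) blocks and
# yields a 512-bit (64-byte) digest.
d256 = hashlib.sha256(msg)
d512 = hashlib.sha512(msg)

print(d256.digest_size, d256.block_size)  # 32 64
print(d512.digest_size, d512.block_size)  # 64 128
```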

In modern versions of the TLS protocol, HMAC (Hashed Message Authentication Code) is used, which combines a hash function with a shared secret key. The resulting authentication tag is transmitted along with the flow of information, and to confirm authenticity both parties must use the same secret key. This provides greater security.
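A minimal HMAC sketch with Python's standard hmac module shows the idea; the key and message here are invented examples:

```python
import hashlib
import hmac

key = b"shared-secret-key"        # both parties must hold the same key
message = b"handshake finished"

# The sender computes a tag over the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).digest()

# The receiver recomputes the tag and compares in constant time.
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
bad = hmac.compare_digest(tag, hmac.new(key, b"tampered", hashlib.sha256).digest())
print(ok, bad)  # True False
```

Any change to the message (or use of a different key) produces a different tag, which is how integrity and authenticity are verified.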

The General Algorithm of SSL Operation

1.   Handshake protocol. The connection confirmation (handshake) protocol is the order of operations performed directly during the initialization of the SSL connection between the client and the server. The protocol allows the server and client to carry out mutual authentication, determine the encryption algorithm and MAC, as well as secret keys to protect data during a further SSL session. The handshake protocol is used by participants at the stage before data exchange. Each message transmitted as part of the handshake protocol contains the following fields:

  • Type is the category of the message; there are 10 categories of messages.
  • Length is the length of the message in bytes.
  • Content is the message itself and its parameters.

During the handshake, the following stages take place:

1.1 Determination of supported algorithms. At the first stage, the connection between the client and the server is initiated and the encryption algorithms are selected. First, the client sends a welcome message to the server, before entering response-waiting mode. After receiving the client's welcome message, the server returns its own welcome message to the client to confirm the connection. The client's welcome message includes the following data:

  • The maximum SSL version number that the client can support
  • A 32-byte random number used to generate the master secret
  • Session ID
  • A list of cipher suites
  • A list of compression algorithms

Each cipher suite name in the list encodes the following components:

  • The name of the protocol, for example, "SSL" or "TLS".
  • Key exchange algorithm (with an indication of the authentication algorithm).
  • The encryption algorithm.
  • Hashing algorithm. For example, the entry "SSL_DHE_RSA_WITH_DES_CBC_SHA" defines the fragment "DHE_RSA" (ephemeral Diffie-Hellman with an RSA digital signature) as the key exchange algorithm, the fragment "DES_CBC" as the encryption algorithm, and the fragment "SHA" as the hashing algorithm. As will be discussed later, in TLSv1.3 the key exchange and encryption algorithms are combined into an authenticated encryption with associated data (AEAD) algorithm, so the entry there is shorter. Example: TLS_AES_256_GCM_SHA384.

The server's response includes the following fields:
  • The SSL version number. The highest version supported by the client is compared with the highest version supported by the server, and the lower of the two is selected. Depending on the server’s settings, selection priority can be given to either the client or the server.
  • A 32-byte random number used to generate the master secret.
  • Session ID.
  • A set of ciphers from the list of ciphers supported by the client.
  • Compression method from the list of compression methods supported by the client.
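The classic cipher-suite naming convention described above can be sketched with a small parser; the helper `parse_suite` is hypothetical and only handles pre-TLSv1.3 names containing "_WITH_":

```python
def parse_suite(name: str) -> dict:
    """Split a classic cipher-suite name into protocol, key exchange,
    cipher, and hash, per the naming convention described above."""
    protocol, rest = name.split("_", 1)          # "SSL" / "TLS"
    kex, cipher_hash = rest.split("_WITH_", 1)   # key exchange vs the rest
    cipher, hash_alg = cipher_hash.rsplit("_", 1)
    return {"protocol": protocol, "key_exchange": kex,
            "cipher": cipher, "hash": hash_alg}

result = parse_suite("SSL_DHE_RSA_WITH_DES_CBC_SHA")
print(result)
# {'protocol': 'SSL', 'key_exchange': 'DHE_RSA',
#  'cipher': 'DES_CBC', 'hash': 'SHA'}
```

TLSv1.3 names such as TLS_AES_256_GCM_SHA384 omit the "_WITH_" separator because the key exchange is negotiated separately, so they would need different handling.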

1.2 Server authentication and key exchange

At the second stage, all messages are sent by the server. This stage is divided into 4 steps:

  • The sending of a digital certificate to the client so they can use the server's public key for authentication purposes.
  • Key exchange on the server. Depending on the established algorithm, this step may be skipped.
  • Client certificate request. Depending on the settings, the server may require the client to send their own certificate.
  • A message confirming that the server authentication and key exchange stage is complete, before moving on to the next stage.  

1.3 Client authentication and key exchange:

At the third stage, all messages are sent by the client. This stage is divided into 3 steps:

  • The sending of the certificate to the server — if the server requested it (this depends on the established algorithm). If the algorithm includes this, the client can authenticate on the server. For example, in IIS, you can configure mandatory authentication of the client certificate.
  • Client key exchange (pre-master secret): the client generates the pre-master secret and sends it to the server encrypted with the server's public key. Only the genuine server can decrypt it with its private key, so in the case of server substitution the client's secret remains safe and the connection can be terminated.
  • Signing a random number to confirm ownership of the certificate's public key. This stage also depends on the algorithm chosen.

1.4 Completion of the handshake

At the fourth stage, the finishing messages are exchanged and errors are monitored. If an error is detected, the alert protocol comes into effect. This stage consists of exchanging session messages: the first two messages come from the client, and the last two come from the server.

2.   The Key Generation Process

To ensure the integrity and confidentiality of information, SSL requires six encryption secrets: four keys and two initialization vector (IV) values. The information’s authenticity is guaranteed by authentication keys (for example, HMAC keys), the data itself is encrypted with symmetric session keys, and cipher blocks are chained starting from the IVs. The keys required by SSL are unidirectional, with a separate set for each direction of traffic, so when a client is hacked, the data obtained cannot be used to hack the server.
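The six direction-specific secrets can be sketched as a toy key expansion; real TLS uses a specified PRF (or HKDF in TLSv1.3), and the label strings and sample master secret below are invented for illustration:

```python
import hashlib
import hmac

# Toy expansion of a master secret into six independent secrets.
master_secret = b"\x01" * 48  # invented 48-byte sample value

labels = [
    "client MAC key", "server MAC key",
    "client encryption key", "server encryption key",
    "client IV", "server IV",
]

# Keying each label with HMAC yields six distinct, reproducible values.
derived = {
    label: hmac.new(master_secret, label.encode(), hashlib.sha256).digest()
    for label in labels
}
```

Because each direction of traffic gets its own MAC key, encryption key, and IV, compromising the secrets of one side does not expose the other's.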

3.   Record Protocol

The record protocol is used after a connection between the client and the server has been successfully established, once the client and server have passed mutual authentication and agreed on the algorithms they will use to exchange information. The record protocol implements the following functions:

  • Confidentiality by using the secret key defined at the handshake stage;
  • Integrity by analyzing the MAC defined at the handshake.

4.   Alert Protocol

When the client or server detects an error, it sends a message reporting it. If the error is critical, the algorithm immediately closes the SSL connection, and both sides first delete the session details: the identifier, secret, and keys. Each error message is 2 bytes long. The first byte indicates the severity of the error: a value of 1 denotes a warning, while a value of 2 denotes a critical (fatal) error. The second byte indicates the nature of the error.

2.2 Versions of SSL (SSL, TLS) — and How They Differ

During the initial installation of a secure connection between the client and the server, the protocol is selected from those supported by both sides from the set of SSLv3, TLSv1, TLSv1.1, TLSv1.2 or TLSv1.3.
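This version negotiation can be constrained in code; the sketch below uses Python's standard ssl module to bound the acceptable protocol versions, after which the endpoints agree on the highest version both sides allow:

```python
import ssl

# Restrict negotiation to TLSv1.2 and TLSv1.3, excluding legacy
# protocol versions entirely.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_3

print(ctx.minimum_version.name, ctx.maximum_version.name)
```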

Earlier versions of the SSL protocol are no longer used. SSLv1 was never made public. SSLv2 was released in February 1995, but it contained many security flaws that led to the development of SSLv3. Various IT companies then began to implement their own versions of secure data transfer protocols. In order to prevent fragmentation and monopolization in the field of network security, the Internet Engineering Task Force (IETF), an international community of designers, scientists, network operators, and providers created by the Internet Architecture Board in 1986 to develop protocols and organize the internet, standardized TLS version 1.0, which differs only slightly from SSL 3.0.

The technical details of the protocol are recorded in documents called RFCs (Request for Comments, working proposals). These documents can be found on the IETF website: , where XXXX is a four-digit RFC number. Thus, TLSv1.0 is fixed in RFC 2246, TLSv1.1 in RFC 4346, TLSv1.2 in RFC 5246, and TLSv1.3 in RFC 8446. In addition, RFC 3546 defines several extensions for cases when TLS is used in systems with limited bandwidth, such as wireless networks; RFC 6066 defines a number of additional TLS changes made to the extended client greeting format (presented in TLSv1.2); RFC 6961 defines a method for reducing traffic when a client requests information about the status of a certificate from the server; and, finally, RFC 7925 defines how TLS (and DTLS) is used in the IoT (Internet of Things) to exchange data between hardware and other physical objects without human intervention.

As mentioned above, the TLSv1 protocol was released as an update to SSLv3. RFC 2246 states that "the differences between this protocol and SSLv3 are not hugely significant, but they are significant enough to exclude interaction between TLSv1 and SSLv3."

In contrast to TLS version 1.0, the TLSv1.1 protocol provides:

  • Protection against attacks using CBC (Cipher Block Chaining), in which each block of plaintext is combined with the previous block of ciphertext before encryption:
    1.   The implicit initialization vector (the original pseudorandom number initiating the calculation of the further cipher, IV) was replaced by an explicit one which is not secret, but nonetheless cannot be predicted in a reasonable timeframe.
    2.   A change in the handling of block filling errors when a data packet is expanded to a fixed block size.
  • Support for registering server IP address parameters and other network information.

The TLS 1.2 protocol is based on the TLS 1.1 specification and is currently the most common version. The main differences include:

  • The combination of the MD5 and SHA-1 hashing algorithms in the pseudorandom function (PRF) has been replaced by the more secure SHA-256, with the possibility for a cipher suite to specify the function to use.
  • The hash size in the finished message has become at least 96 bits.
  • The combination of MD5–SHA-1 hashing algorithms in the digital signature has been replaced by a single hash agreed upon during the handshake, which is SHA-1 by default.
  • The implementation of the function of selecting encryption and hashing algorithms for the client and server.
  • The extension of support for authenticated encryption ciphers used mainly for Galois/Counter mode (GCM) and CCM mode for Advanced Encryption Standard (AES).
  • The addition of TLS extension definitions and AES cipher suites.
  • The ending of backward compatibility with SSLv2 as part of the 6176 RFC. Thus, TLS sessions have ceased to negotiate the use of SSL version 2.0.

The TLS 1.3 protocol is based on the TLS 1.2 specification. Internet services are gradually transitioning to this protocol. The main differences include:

  • The separation of key agreement and authentication algorithms from the cipher suites.
  • The ending of support for unstable and less-used named elliptic curves.
  • The ending of support for MD5 and SHA-224 cryptographic hash functions.
  • The requirement of digital signatures even when a previous configuration is used.
  • The integration of the HMAC-based key derivation function (HKDF) and the semi-ephemeral DH proposal.
  • The introduction of support for 1-RTT handshakes (session establishment in a single round trip) and initial support for the 0-RTT mode (session resumption with zero round-trip time).
  • Session keys obtained using a set of long-term keys can no longer be compromised when attackers gain access to those long-term keys. This property is called perfect forward secrecy (PFS) and is implemented through the use of ephemeral keys during DH key agreement.
  • The ending of support for many insecure or outdated functions, including compression, renegotiation, ciphers other than AEAD-block encryption modes (Authenticated Encryption with Associated Data), non-PFS key exchange (including static RSA key exchange and static DH key exchange), configurable EDH groups, elliptic curve point ECDH format negotiation, encryption modification specification protocol, UNIX time welcome message, etc.
  • The prevention of SSL or RC4 negotiation that was previously possible to ensure backward compatibility.
  • The freezing of the record-level version number to improve backward compatibility.
  • The addition of the ChaCha20 stream cipher with the Poly1305 message authentication code.
  • The addition of digital signature algorithms Ed25519 and Ed448.
  • The addition of the x25519 and x448 key exchange protocols.
  • The addition of support for sending multiple responses to the Online Certificate Status Protocol, OCSP.
  • The encryption of all handshake messages after the server greeting (ServerHello).

2.3 What Is PKI (Public Key Infrastructure)?

Public Key Infrastructure (PKI) is a system of software, hardware and regulatory methods that solves cryptographic tasks based on a pair of private and public keys. PKI is based on the exchange participants placing their trust in the certifying center when they have no information about each other. The certifying center, in turn, confirms or refutes that a public key belongs to the specified person, who holds the corresponding private key.

The main components of PKI:

  • The certifying center or Certification Center is an organization that performs, among other things, legal verification of data on participants in a network interaction (client or server). From a technical point of view, the Certification Center is a software and hardware complex that manages the lifecycle of certificates, but not their direct use. It is a trusted third party.
  • A public key certificate (most often just ‘certificate’) consists of client or server data and a public key, signed with the electronic signature of the Certifying Center. The issuance of a public key certificate by a Certification Authority ensures that the person specified in the certificate also owns the private part of the corresponding key pair.
  • Registration Center (RC) is an intermediary of the Certification Center that acts on the basis of trust in the root Certification Center. The Root Certification Center trusts the data received by the Registration Center while verifying the information about the subject. After verifying the authenticity of the information, the Registration Center signs it with its own key and transmits the data it has received to the root Certification Center. The Root Certification Authority verifies the registration authority’s signature and, if successful, issues a certificate. One Registration Center can work with several Certification Centers (in other words, it can consist of several PKIs), just as one Certification Center can work with several Registration Centers. This component may not be present in the corporate infrastructure.
  • Repository – a constantly updated store of valid certificates and the list of revoked certificates. The list of revoked certificates (Certificate Revocation List, CRL) contains data on issued certificates whose paid period or validity period has elapsed, as well as certificates of resource owners that have been compromised or have not been authenticated. In Federal Law of the Russian Federation No. 63 "On Electronic Signatures", this component is called the register of signature key certificates.
  • A Certificate Archive is a repository of all certificates ever issued (including expired certificates) within the current PKI. The certificate archive is used for security incident investigations, which include verifying all data that has ever been signed.
  • The Request Center is the personal account of the Certification Center’s clients, where end users can request a new certificate or revoke an existing one. It is implemented most often in the form of a web interface for the registration center.
  • End users are clients, applications, or systems that own a certificate and use the public key management infrastructure.

3. How the Browser Works with SSL Certificates

3.1 What Happens in the Browser When the Certificate Is Checked?

Regardless of any extensions, browsers should always check a certificate’s basic information, such as the signature or the publisher. Steps for verifying Certificate Information:

1.   Checking the integrity of the certificate. This is done with the cryptographic Verify operation with a public key. If the signature is invalid, then the certificate is considered fake: it has been modified after it was issued by a third party, so it is rejected.

2.   Verifying the validity period of the certificate. This is done with the cryptographic Decrypt operation and by reading the accompanying information. The certificate is considered valid as long as the period the client has paid for has not elapsed and the expiration date has not passed. The expiration date of the certificate limits the length of time for which the owner’s identity is vouched for by the Certifying Center that issued the certificate. Browsers reject any certificate whose validity period ends before, or begins after, the date and time of verification.
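The validity-window check can be sketched with Python's standard ssl module, which parses the date format found in certificates; the sample dates below are invented:

```python
import ssl
import time

# Hypothetical notBefore/notAfter values in the format returned by
# ssl.getpeercert() for a real certificate.
cert = {
    "notBefore": "Jan 10 00:00:00 2020 GMT",
    "notAfter":  "Jan 10 00:00:00 2099 GMT",
}

not_before = ssl.cert_time_to_seconds(cert["notBefore"])
not_after = ssl.cert_time_to_seconds(cert["notAfter"])
now = time.time()

# The certificate is accepted only if "now" falls inside the window.
valid_now = not_before <= now <= not_after
print(valid_now)
```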

3.   Checking the certificate revocation status. This is done with the cryptographic Decrypt operation, and loading and reconciliation with CRL. A number of circumstances, for example, law enforcement agencies’ appeals, the identification of a change in the source information or confirmation of the fact that the server's private key has been compromised, can make the certificate invalid before its expiration date. To do this, the certificate is added to the CRL on the side of the Certifying Center.

Certification authorities periodically release a new version of the signed CRL, and it is distributed in public repositories. Browsers access the latest version of the CRL when verifying the certificate. The main drawback of this approach is that it limits verification to the CRL issuance period. The browser will be informed of the revocation only after it receives the current CRL. Depending on the policy of the signing Certification Authority, the CRL update period can be calculated in weeks.
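Conceptually, the CRL lookup reduces to a membership test against the issuer's latest revocation list; the serial numbers in this sketch are invented:

```python
# Serial numbers taken from a (hypothetical) downloaded CRL.
crl_serials = {0x1A2B, 0x3C4D, 0x5E6F}

def is_revoked(serial: int) -> bool:
    """A certificate is treated as revoked if its serial number
    appears on the issuer's current revocation list."""
    return serial in crl_serials

print(is_revoked(0x3C4D))  # True
print(is_revoked(0x7788))  # False
```

The staleness problem described above follows directly from this design: a serial revoked after the list was published is not in the local set until the next CRL download.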

When working with TLSv1.2 and TLSv1.3, the browser can use the Online Certificate Status Protocol (OCSP), described in RFC 6960. OCSP allows the browser to request the revocation status of a particular certificate online. If OCSP is configured correctly, certificate status checks are much faster than CRL lookups, and actually revoked certificates can be rejected without waiting for the next CRL update. There is also OCSP Stapling technology, which allows the web server to include a copy of the Certifying Center’s signed response about the certificate's status directly in the TLS handshake, which in turn increases the performance and speed of data exchange.

4.   Verification of the certificate publisher by the certificate chain.

Certificates are usually associated with several Certification Authorities: the root authority, which is the owner of the public key for signing certificates, and a number of intermediary ones, which refer to previous owners of the public key all the way up to the root one.

Browsers check the certificates of each Certifying Authority for membership in the chain of trust with the root at its head. For added security, most PKI implementations also verify that the public key of the Certifying Authority matches the key with which the current certificate was signed. This is also how self-signed certificates are detected: their issuer and subject coincide, and they are trusted only on the server where they were issued or where they have been added to the list of root certificates.
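The chain-of-trust walk can be sketched as follows; the certificate records are simplified stand-ins (real validation also checks signatures, dates, and extensions):

```python
# Each certificate's issuer must be the subject of the next certificate,
# ending at a root the client already trusts.
chain = [
    {"subject": "example.com",     "issuer": "Intermediate CA"},
    {"subject": "Intermediate CA", "issuer": "Root CA"},
    {"subject": "Root CA",         "issuer": "Root CA"},  # self-signed root
]
trusted_roots = {"Root CA"}

def chain_is_trusted(chain, trusted_roots):
    """Walk the chain, checking issuer/subject linkage, then confirm
    the final certificate is a pre-installed trusted root."""
    for cert, parent in zip(chain, chain[1:]):
        if cert["issuer"] != parent["subject"]:
            return False
    return chain[-1]["subject"] in trusted_roots

print(chain_is_trusted(chain, trusted_roots))  # True
```

A self-signed leaf certificate fails this walk unless it has itself been added to the trusted root set, matching the behaviour described above.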

The X.509 v3 format allows you to determine which chain certificates should be checked. These restrictions rarely affect the average Internet user, although they are quite common in corporate systems at the development and debugging stage.

5.   Checking the domain name restriction

The certification authority may restrict the validity of the certificate on a server with a specific domain name or a list of the organization's child domains. Domain name restrictions are often used for intermediate Certification Authority certificates purchased from a publicly trusted Certification Authority to exclude the possibility of issuing valid certificates for third-party domains.

6.   Checking the certificate issuance policy

The Certificate Issuance Policy is a legal document published by the Certification Authority, which describes in detail the procedures for issuing and managing certificates. Certification authorities can issue a certificate in accordance with one or more policies, links to which are added to the information of the issued certificate so that the verifying parties can validate these policies before deciding whether to trust this certificate. For example, restrictions may be imposed on the region or time frame (for the period of technological maintenance of the Certification Center software).

7.   Checking the length of the certificate chain

The X.509 v3 format allows publishers to define the maximum number of intermediate certification authorities that can support a certificate. This restriction was introduced after the possibility of forgery of a valid certificate was demonstrated in 2009 by including a self-signed certificate in a very long chain.

8.   Verifying the public key assignment

The browser checks the purpose of the public key contained in the certificate: encryption, digital signatures, certificate signing, and so on. Browsers reject certificates used outside their stated purpose, for example a server certificate whose key is intended only for CRL signing.

9.   Checking the rest of the chain certificates

The browser checks each certificate in the chain. If every check completes without errors, the entire chain is considered valid. If any errors occur, the chain is marked as invalid and a secure connection is not established.

3.2 How to View Certificate Information and Check that Everything Is Working Correctly

The security certificate can be checked directly in the browser. All modern browsers display certificate information visibly in the address bar. If a secure connection with a web resource is established, a lock icon is displayed on the left of the browser address bar. In case of an error, the crossed-out word "HTTPS" or an open lock icon will be displayed. Depending on the type of browser and its version, the type of icons and behavior when working with SSL certificates may differ. Below are examples of images for different versions of modern browsers:

Screenshots: Google Chrome, Mozilla Firefox, Microsoft Edge, Chrome for Android, Safari for iOS.

To view the details of the certificate, click on the lock icon and in the subsequent menu, click on the option that outlines the security details. Information about the certificate will appear after clicking on the appropriate button or information link.
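The same certificate details can also be read outside the browser with OpenSSL. A sketch: the live-site command is shown as a comment because example.com is only a placeholder, while the runnable part inspects a throwaway self-signed certificate generated on the spot.

```shell
# For a live site (placeholder host):
#   openssl s_client -connect example.com:443 -servername example.com </dev/null \
#       | openssl x509 -noout -subject -issuer -dates
# For a certificate file on disk (a throwaway one is generated here
# so the example is self-contained):
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
    -days 30 -subj "/CN=demo.example"
openssl x509 -in demo.crt -noout -subject -issuer -dates
```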

Screenshots: Google Chrome, Mozilla Firefox, Microsoft Edge, Chrome for Android.

3.3 A Message that the Browser Does Not Trust the Certificate

When a certificate cannot be verified by a trusted certificate authority, most browsers display a security warning informing you of this.

There are a number of reasons why an SSL certificate may be considered invalid in the browser. The most common reasons are:

  • Errors in the certificate chain installation process, the intermediate certificate is missing;
  • The SSL certificate has expired;
  • The SSL certificate is valid only for the primary domain, not for subdomains;
  • A self-signed SSL certificate has been used, or the root certificate of the Certification Authority has not been added to the trusted list on the current device.

4. Certification Centers

4.1 More Details about the Certification Centers

As mentioned above, the main task of the Certification Center is to confirm the authenticity of encryption keys using electronic signature certificates. The overarching operating principle can be described by the phrase "users do not trust each other, but everyone trusts the Certifying Center."

Any HTTPS interaction is based on the fact that one participant has a certificate signed by the Certification Authority, and the other attempts to verify the authenticity of this certificate. Verification will be successful if both participants trust the same Certification Authority. To solve this problem, the Certification Center’s certificates are preinstalled in operating systems and browsers. If the Certification Authority itself has issued a certificate, it is called a root certificate. A certificate issued by a partner of the Certification Authority with which it has a trust relationship is called an intermediate certificate. As a result, a tree of certificates is formed with a chain of trust between them.

By installing the certificate of the Certifying Center in the system, you can trust the certificates that have been signed with it. A certificate (particularly for HTTPS) that is issued but not signed by a root or intermediate Certification authority is called a self-signed certificate and is considered untrusted on all devices where this certificate is not added to the root/intermediate lists.

Depending on the distribution level of its certificates, a Certification Center can be international, regional, or corporate. The public key management infrastructure operates in accordance with the regulations of the appropriate level: public directives recorded by the international community of Internet users, the legislation of the region, or the relevant provisions of the organization.

The main functions of the certification center are:

  • verifying the identity of future certificate users;
  • issuing certificates to users;
  • revoking certificates;
  • maintaining and publishing lists of revoked certificates (Certificate Revocation List/CRL), which are used by public key infrastructure clients when they decide whether to trust a certificate.

Additional functions of the certification center are:

  • Generating key pairs, one of which will be included in the certificate.
  • Upon request, when resolving conflicts, the Certification Authority can verify the authenticity of the electronic signature of the owner of a certificate it has issued.

Browsers and device operating systems record their trust in a Certification Authority by accepting its root certificate into their store – a special database of root certificates of Certification Authorities. The store is placed on the user's device when the OS or browser is installed. For example, Windows maintains a root certificate store in the operating system, Apple has a so-called trust store, and Mozilla (for its Firefox browser) maintains a separate certificate store. Many mobile operators also have their own stores. Regional and corporate root certificates are added either at the stage of software certification in the country, or by contacting the organization's technical support.

Regional representatives of international Certification Authorities have the authority to make legal inquiries about the activities of organizations that publish web resources. For corporate Certification Authorities this is not necessary, since they usually have access to the organization's internal information. For security purposes, Certification Authorities should not issue digital certificates directly from the root certificate, but only through one or more Intermediate Certificate Authorities (ICA). These intermediate Certification Authorities are required to comply with security recommendations in order to minimize the root Certification Authority's exposure to attacks, though there are exceptions. For example, GlobalSign is one of the few certification authorities that has always (since 1996) used ICAs.

Certificates come in different formats and support not only SSL, but also the authentication of people and devices, as well as certifying the authenticity of code and documents. Within the Russian legislative framework, such activities must be licensed by the FSB, since they are related to cryptographic operations.

The universal algorithm for obtaining a certificate from the Certification Center:

1.   Private key generation
2.   Creation of a certificate signing request (CSR request)
3.   Procurement of a certificate signed by the Certificate Authority’s root certificate after passing the checks
4.   Configuration of the web server for your resource

Since browsers have a copy of the international Certification Authority’s root certificate, as well as a number of intermediate certificates from the chain of trust, the browser can check whether a certificate was signed by a trusted certification authority. When users or an organization create a self-signed certificate, the browser does not trust it as it knows nothing about the organization, so the root certificate of the organization must be manually added to all controlled devices. These certificates will become trusted after this.

4.2 What Are Root Certificates?

A root certificate is a file that contains service information about the Certification Authority. Special software or a library that verifies, encrypts, and decrypts information is called a crypto provider (a provider of cryptographic functions). Through the root certificate, the crypto provider gains access to the encrypted information, thereby confirming the authenticity of a personal electronic signature.

A chain of trust for the certificates is then built based on the certifying center’s root certificate. Any electronic signature issued by the Certifying Center only works if there is a root certificate.

The root certificate stores information with the dates of its validity. The cryptographic provider can also get access to the organization's registry through the root certificate.

4.3 What Is a Certificate Chain?

Historically and technologically, certain Certification Centers are widely recognized among SSL users, and it was agreed that the certificates they issue would be considered root certificates and would always be trusted. Regional Certification Centers, in turn, can be confirmed by a root Certification Center, and they can confirm other certificates, forming a chain of trust. The Certification Center acts as a guarantor-certifier which issues an SSL certificate at the request of the owner of a web resource.

The certificate and the web resource to which it is issued are certified by an electronic digital signature (EDS). This signature indicates who the owner of the certificate is and records its contents, that is, it allows you to check whether it has been changed by someone after it was issued and signed.

The list of certificates of root Certifying centers and their public keys is initially placed in the operating system’s software storage on the users' workstation, in the browser, and in other applications that use SSL.

If the chain of sequentially signed certificates ends with the root certificate, all certificates included in this chain are considered confirmed.
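This chain logic can be demonstrated with OpenSSL. A minimal sketch, assuming nothing beyond a standard OpenSSL installation (all names below are throwaway):

```shell
# 1. A self-signed root CA certificate:
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.pem \
    -days 30 -subj "/CN=Demo Root CA"
# 2. A leaf certificate signed by that root:
openssl req -newkey rsa:2048 -nodes -keyout site.key -out site.csr \
    -subj "/CN=site.example"
openssl x509 -req -in site.csr -CA root.pem -CAkey root.key \
    -CAcreateserial -out site.pem -days 30
# 3. Verification succeeds because the chain ends at a trusted root;
#    this prints "site.pem: OK":
openssl verify -CAfile root.pem site.pem
```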

Root certificates located on the user's workstation are stored in a container protected by the operating system from accidental access. However, the user can add new root certificates themselves, and this is a source of potential security problems.

By carrying out certain actions and accessing an attacked workstation, an attacker can include their own certificate among the root certificates and use it to decrypt the data that is received.

The Root Certification Center can be formed by the government of a particular country or the leaders of an organization. In these cases, root Certification Centers will not operate everywhere, but they can nonetheless be used quite successfully in a specific country or within a specific enterprise.

At present, the list of root certification authorities on the user's computer can be automatically changed when updating the operating system, software products, or manually by the system administrator.

Certification centers can issue a variety of SSL certificates linked by what is known as a tree structure. The root certificate is the root of the tree, holding the private key with which other certificates are signed. All intermediate certificates one level down inherit the degree of trust that the root certificate has. SSL certificates located further down the structure receive trust in the same way from the Certification Centers above them in the chain. Using the example of the Comodo Certification Center, the structure of SSL certificates can be explained as follows:

1.   The root certificate of the Comodo Certification Authority: AddTrustExternalCARoot

2.   Intermediate Certificates: PositiveSSL CA 2, ComodoUTNSGCCA, UTNAddTrustSGCCA, EssentialSSLCA, Comodo High-Assurance Secure Server CA

3.   SSL certificates for individual domains

5. General Information about Certificate Types

5.1 Paid Trusted Certificates

Obtaining a trusted certificate is, with a few exceptions, a paid service.

5.1.1 Where and How to Buy

In most cases in Russia, web resource hosting companies or partner organizations of international Certification centers provide SSL certificate services. It is possible to purchase certificates directly from Certification Centers, but such certificates are usually more expensive than from partners who purchase them in bulk.

The procedure for purchasing an SSL certificate is no different from purchasing other internet services. It entails:

1.   Selecting a supplier and going to the SSL certificates order page.

2.   Selecting the appropriate SSL certificate and clicking the purchase button.

3.   Entering the name of your domain and selecting the protection option — for one domain or Wildcard certificate for a group of subdomains.

4.   Paying for the service in whichever way is most convenient.

5.   Continue configuring the service in accordance with the following parameters:

a.   The number of domains that the certificate protects (i.e. one or more).
b.   Subdomain support.
c.   The speed of release. Certificates with domain-only validation are issued the quickest, while certificates with EV validation are issued the slowest.
d.   Most Certifiers offer unlimited certificate reissues. This is required if there are mistakes in the organization data.
e.   Warranty – for some certificates there is a $10,000 warranty. This is a guarantee not for the certificate buyer, but rather for the visitor of a site that installs a certificate. If a site visitor with such a certificate suffers from fraud and loses money, the Certification Center undertakes to compensate the stolen funds up to the amount specified in the guarantee. In practice, such cases are extremely rare.
f.   Free trial period – some paid certificates, such as Symantec Secure Site, GeoTrust RapidSSL, Comodo Positive SSL, and Thawte SSL Web Server, offer a free trial period. There are also free certificates.
g.   Refund – almost all certificates have a 30-day refund policy, although there are certificates without this.

5.1.2 Approximate Cost

SSL certificates can be separated into different groups based on their properties.

1.   Regular SSL certificates. These are issued instantly and confirm only one domain name. Cost: from $20 per year

2.   SGC certificates. These allow older clients to use stronger encryption. Server Gated Cryptography technology forcibly raises the encryption level to 128 bits in older browsers that supported only 40- or 56-bit encryption. SGC solves this one problem but cannot address the other vulnerabilities present in insecure browsers, so a number of root Certification Centers do not support the technology. Cost: from $300 per year.

3.   Wildcard certificates. They provide encryption for all subdomains of the same domain by mask. For example, if the same certificate must be installed on a domain and its subdomains (say example.com, mail.example.com, and shop.example.com), customers can issue a certificate for *.example.com. Depending on the number of subdomains that need the certificate, it may be more cost-effective to purchase several ordinary SSL certificates individually. Examples of wildcard certificates: Comodo PositiveSSL Multi-Domain Wildcard and Comodo Multi-Domain Wildcard SSL. Cost: from $180 per year.

4.   SAN certificates. Subject Alternative Name technology allows customers to use one certificate for several different domains hosted on the same server. Such certificates are also referred to as UCC (Unified Communications Certificate), MDC (Multi-Domain Certificate) or EC (Exchange Certificate). Generally, one SAN certificate includes up to 5 domains, but this number can be increased for an additional fee. Cost: from $395 per year.

5.   Certificates with IDN support. These certificates support national domains (International Domain Names, such as *.RU, *.CN, *.UK). Not all certificates support IDN; this must be clarified with the Certification Center. Certificates supporting IDN include:

  • Thawte SSL123 Certificate;
  • Thawte SSL Web Server;
  • Symantec Secure Site;
  • Thawte SGC SuperCerts;
  • Thawte SSL Web Server Wildcard;
  • Thawte SSL Web Server with EV;
  • Symantec Secure Site Pro;
  • Symantec Secure Site with EV;
  • Symantec Secure Site Pro with EV.

As is mentioned above, partners of Certification Centers can provide significant discounts on prices — starting at $10 — or offer service packages.

5.1.3. Certificate Validation

Certificates are divided into the following levels of validation:

1.   DV

Domain Validation, or certificates with domain validation. The certification authority verifies that the client requesting the certificate controls the domain in question, typically using the WHOIS network service. This type of certificate is the cheapest and most popular, but it is not completely secure, since it contains only the registered domain name in the CN field (CommonName, the common domain name of a web resource).

2.   OV

Organization Validation, or certificates with organization verification. The certification center verifies that the requesting client is a registered commercial, non-profit, or government organization; the client must provide legal information when purchasing. This type of certificate is considered more reliable, since it meets the RFC standards and also confirms the registration data of the owner company in the following fields:

  • O (Organization – name of the organization);
  • OU (Organizational Unit – name of the organization's division);
  • L (Locality – name of the locality of the organization’s legal address);
  • S (State or Province Name – name of the territorial and administrative unit of the organization’s legal address);
  • C (Country Name – the name of the organization's country).

The certification center can contact the company directly to confirm this information. The certificate contains information about the party that confirmed it, but not data about the owner. An OV certificate for a private person is called an IV certificate (individual validation/individual verification) and verifies the identity of the person requesting the certificate.

3.   EV

Extended Validation, or certificates with extended validation. The Certification Center verifies the same data as for OV, but in accordance with stricter standards set by the CA/Browser Forum. The CA/Browser Forum (Certification Authority Browser Forum) is a voluntary consortium of certification authorities, developers of Internet browsers and software for secure email, operating systems, and other applications with PKI support. The consortium publishes industry recommendations governing the issuing and management of certificates. This type of certificate is considered the most reliable. Previously, when such a certificate was used, the browser's address bar changed color and displayed the name of the organization. EV certificates are widely used by web resources that conduct financial transactions and require a high level of confidentiality. However, many sites prefer to redirect users to external payment resources confirmed by certificates with extended validation, while using OV certificates, which are secure enough to protect the rest of the user data.

5.1.4. The Setup Process (General Information, What Is CSR?)

To initiate the certificate issuing process, a CSR request must be made. Technically, a CSR request is a file that contains a small fragment of encrypted data about the domain and the company to which the certificate is issued. The public key is also stored in this file.

The CSR generation procedure depends entirely on the software used on your server, and is most often performed using the settings in the administrative panel of your hosting. If your hosting does not provide this, then you can use online services to generate a CSR request, or alternatively you can turn to specialized software, such as OpenSSL, GnuTLS, Network Security Services, etc. After generating the CSR, the private key will also be generated.

To successfully generate a CSR, you need to enter data about the organization that has requested the certificate. The information must be entered in the Latin alphabet. The following parameters are sufficient:

  • Country Name — the country of registration of the organization in two-letter format. For the Russian Federation — RU;
  • State or Province Name — the region of registration of the organization. For Moscow — Moscow;
  • Locality Name — the city where the organization is registered. For Moscow — Moscow;
  • Organization Name — the name of the organization. For individuals, "Private Person" is indicated;
  • Common Name — the domain name of those who have requested the certificate;
  • Email Address — the administrator’s email address. Acceptable values:
    •   admin@domain_name;
    •   administrator@domain_name;
    •   hostmaster@domain_name;
    •   postmaster@domain_name;
    •   webmaster@domain_name.
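With OpenSSL, the fields above map directly onto the -subj string of a CSR. A sketch (the organization, domain, and file names are placeholders):

```shell
# Generate a private key, then a CSR carrying the listed fields:
openssl genrsa -out private.key 2048
openssl req -new -key private.key -out request.csr \
    -subj "/C=RU/ST=Moscow/L=Moscow/O=Example Org/CN=www.example.com/emailAddress=admin@example.com"
# Review what will be sent to the Certification Authority:
openssl req -in request.csr -noout -subject
```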

5.2. Self-Signed Certificates

Self-signed certificates are SSL certificates created by the service developers themselves. A key pair for them is generated with specialized software, for example OpenSSL. Such certificates may well be used for internal purposes, i.e. between devices within your network, or for applications at the development stage.
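For example, a self-signed certificate can be produced with a single OpenSSL command (the file names and CN are placeholders):

```shell
# Key and self-signed certificate in one step:
openssl req -x509 -newkey rsa:2048 -nodes -keyout selfsigned.key \
    -out selfsigned.crt -days 365 -subj "/CN=internal.example"
# Subject and issuer are identical - the defining trait
# of a self-signed certificate:
openssl x509 -in selfsigned.crt -noout -subject -issuer
```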

5.3. Let’s Encrypt

Let's Encrypt is a Certification Authority that provides free X.509 cryptographic certificates for encrypting HTTPS data transmitted over the Internet and other protocols used by servers on the Internet. The process of issuing certificates is fully automated. The service is provided by the non-profit Internet Security Research Group (ISRG).

The Let's Encrypt project was started to move most Internet sites to HTTPS. Unlike commercial Certification Centers, the project does not require payment, manual reconfiguration of web servers, e-mail correspondence, or manual handling of expiring certificates. This simplifies the installation and configuration of TLS encryption. For example, on a typical Linux-based web server, two commands are enough to configure HTTPS encryption and to obtain and install a certificate, taking about 20-30 seconds.

Let's Encrypt root certificates are installed as trusted by major software vendors, including Microsoft, Google, Apple, Mozilla, Oracle and Blackberry.

The Let's Encrypt Certification Authority issues DV certificates with a validity period of 90 days. It has no plans to start issuing OV or EV Certificates, although it began providing support for Wildcard certificates some time ago.

The key of the RSA root certificate has been stored in an HSM hardware module since 2015 and is not connected to the network. This root certificate signs two intermediate certificates, which were also cross-signed by the IdenTrust certification authority. One of the intermediate certificates is used to issue sites' final certificates, while the second is kept as a backup in storage that is not connected to the Internet, in case the first is compromised. Since the root certificate of the IdenTrust center is preinstalled in most operating systems and browsers as a trusted root certificate, the certificates issued by the Let's Encrypt project are verified and accepted by clients, even though the ISRG root certificate itself is absent from the trusted list.

The Automated Certificate Management Environment (ACME) authentication protocol is used to automatically issue a certificate to the destination site. In this protocol, a series of requests are made to the web server that seeks a signature for the certificate to confirm the ownership of the domain (DV). To receive requests, the ACME client configures a special TLS server, which is polled by the ACME server using Server Name Indication (Domain Validation using Server Name Indication, DVSNI).

Validation is carried out repeatedly, using different network paths. DNS records are pulled from a variety of geographically distributed locations to prevent DNS spoofing attacks, in which an attacker alters cached domain name data in order to return a false IP address and redirect the victim to the attacker's resource (or any other resource on the network).

6. Using Paid Trusted Certificates

6.1 Usage on Windows Server and IIS

6.1.1 What Are the Formats of the Private Key?

These are the private key and certificate formats in use today:

1.   PEM format

This format is most often used by Certification Authorities. PEM certificates usually have the extension *.pem, *.crt, *.cer or *.key (for private keys). For example, the CA bundle file available in the download table of a certificate order has the extension *.ca-bundle. The contents of the files are Base64-encoded and enclosed between the lines "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----".

This certificate format is common on Linux. Multiple PEM certificates, and even the private key, can be concatenated in one file, one after another. However, most servers, such as Apache, expect the certificate and private key to be in separate files.

2.   PKCS#7/P7B format

Certificates in PKCS#7 or P7B format are usually saved in Base64 ASCII format and have the extension *.p7b or *.p7c. A P7B file contains the lines "-----BEGIN PKCS7-----" and "-----END PKCS7-----". This format contains only the certificate and the certificate chain, but not the private key. Several commonly-used platforms support this format, including Microsoft Windows and Java Tomcat.
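Conversion to and from P7B is a one-liner in OpenSSL. A sketch, where a throwaway certificate stands in for one received from a CA:

```shell
# A stand-in PEM certificate:
openssl req -x509 -newkey rsa:2048 -nodes -keyout p7b.key -out p7b.crt \
    -days 30 -subj "/CN=p7b.example"
# Pack it into PKCS#7 (P7B):
openssl crl2pkcs7 -nocrl -certfile p7b.crt -out bundle.p7b
# Unpack a P7B back into plain PEM certificates:
openssl pkcs7 -print_certs -in bundle.p7b -out unpacked.pem
```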

3.   PKCS#12/PFX format

PKCS#12 or PFX is a binary format for saving a certificate, any intermediate certificates, and the private key in one encrypted file. PFX files are usually saved with the extension *.pfx or *.p12. As a rule, this format is used on Windows to export and import the certificate and private key.
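Extracting the pieces from a PFX file is also done with OpenSSL. A sketch: the bundle is built first from throwaway files, and the password is a placeholder.

```shell
# Build a throwaway PFX bundle:
openssl req -x509 -newkey rsa:2048 -nodes -keyout win.key -out win.crt \
    -days 30 -subj "/CN=pfx.example"
openssl pkcs12 -export -in win.crt -inkey win.key -out bundle.pfx \
    -passout pass:changeit
# Extract the private key and the certificate back out of it:
openssl pkcs12 -in bundle.pfx -nocerts -nodes -out key.pem -passin pass:changeit
openssl pkcs12 -in bundle.pfx -clcerts -nokeys -out cert.pem -passin pass:changeit
```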

6.1.2 How to Generate a CSR Request

To generate a CSR request in IIS 10, perform the following operations:

1.   Run IIS from the iis.msc command line or from the visual interface.

2.   Select your server from the Connections list and click the Server Certificates button.

3.   On the Server Certificates page, click the Create Certificate Request link in the Actions block.

4.   In the Request Certificate window of the wizard, fill in the CSR fields and click Next.

5.   In the Cryptographic Service Provider Properties window of the wizard, select the required cryptographic provider, depending on the desired algorithm and the key length, and then click Next.

6.   In the File Name window of the wizard, specify the path to the CSR being created, and then click Finish.

To send the finished CSR to the Certification Center, open the file in a text editor and copy the contents to the web form of the certificate provider.

6.1.3 How to Create a Private Key

When the CSR is created, IIS generates the private key automatically. It can be viewed in the Certificates console snap-in, under the Personal or Web Hosting nodes of the certificate tree.

The snap-in can be hidden in the console. To add it, run the mmc command in Start menu > Run and in the window that appears, add the Certificates snap-in to the list available on the local machine:

6.1.4 How to Export It

To export a private key for backup purposes or to configure a new server, follow these steps:

1.   Find the certificate in the Certificates snap-in of the management console, and right-click on it. In the context menu that appears, click on the menu item All Tasks > Export;

2.   In the Welcome to the Certificate Export wizard window of the Certificate Export Wizard, click Next and then in the Export Private Key window, set the switch to Yes, export the private key, and then click Next;

3.   In the Export File Format window of the wizard, select the type item Personal Information Exchange – PKCS #12 (.PFX) and select the checkbox Include all certificates in the certification path if possible. Then click Next. Be aware that if the Delete the private key if the export is successful checkbox is checked, the private key created on the current server will be deleted after export;

4.   In the Security window of the wizard, select the Password checkbox and enter the password twice to protect the private key; it will be required for the subsequent import. Additionally, it is recommended to restrict which Active Directory users or groups may use the private key. To do this, select the Group or User Name checkbox, select the required groups or users, and then click Next;

5.   In the File to Export window of the wizard, specify the path and name of the exported private key file. Enter it manually or use the system file dialog, then click Next;

6.   In the Completing the Certificate Export Wizard window, a list of the chosen settings will appear. Click Finish. The exported file will appear in the specified directory.

6.1.5 How to Configure SSL on IIS

To configure SSL in IIS, follow these steps:

1.   Run IIS from the iis.msc command line or from the visual interface.

2.   Select your server from the Connections list and click on the Bindings... link in the Actions block.

3.   In the Site Bindings window, click Add.

4.   In the Add Site Bindings window, fill in the following fields and click OK.

  • IP address – select the IP addresses of the servers with which the certificate will be associated from the drop-down list or click the All Unassigned button to associate the certificate with all servers.
  • Port – leave the value 443. This is a standard SSL port.
  • SSL certificate – select the required SSL certificate from the drop-down list.

The setup is finished, you can check the operation of the web service. If the private key is missing, then import it in the Certificates snap-in of the Management console. To do this, select the desired resource and right-click on it. Then, in the context menu that appears, click on the menu item All Tasks > Import, and follow the instructions of the wizard.

6.2 Usage on Linux

6.2.1 How to Create a Private Key

The private key can be obtained in the interface of the SSL certificate provider after submitting the CSR, or created with specialized software such as OpenSSL.

Below is a fragment of private key generation in the web interface of the SSL certificate provider.

If the private key was created in the web interface, then the export is carried out by clicking the button there. After clicking on the button, the browser starts downloading the archive with the key file in the desired format.

To create a private RSA key using OpenSSL, one command is enough:

openssl genrsa -out rsaprivkey.pem 2048

This command generates the PEM private key and stores it in the rsaprivkey.pem file. In our example, a 2048-bit key is created, which is suitable for almost all situations.

To create a DSA key, you need to perform two steps:

openssl dsaparam -out dsaparam.pem 2048
openssl gendsa -out dsaprivkey.pem dsaparam.pem

The first step creates a DSA parameters file (dsaparam.pem), which in this case contains instructions for OpenSSL to create a 2048-bit key in step 2. The dsaparam.pem file is not a key, so it can be deleted after the public and private keys are created. In the second step, a private key is generated (dsaprivkey.pem file), which must be kept secret.

To create a file in the PKCS#12 format used in Windows OS, use the following command:

openssl pkcs12 -export -out certificate.pfx -inkey privateKey.key -in certificate.crt -certfile CACert.crt


  • pkcs12 – the OpenSSL subcommand for working with the PKCS#12 format;
  • export – the operation of exporting the private key and certificate to the required format;
  • out – the path in the file system where the resulting file should be placed;
  • inkey – the private key file in PEM format;
  • in – the file of the certificate received from the Certification Authority;
  • certfile – the root certificate and intermediate certificates of the chain (CACert.crt in the example above).
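As a self-contained sketch of the export step, the commands below create a throwaway key and self-signed certificate and then bundle them into PKCS#12. All file names, the subject, and the password are illustrative; with a real certificate you would also pass the chain via -certfile:

```shell
# Create a throwaway key and self-signed certificate for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -keyout privateKey.key \
  -out certificate.crt -days 365 -subj "/CN=www.example.com"
# Bundle key and certificate into a PKCS#12 file (password supplied inline).
openssl pkcs12 -export -out certificate.pfx -inkey privateKey.key \
  -in certificate.crt -passout pass:example
# Verify that the bundle can be read back.
openssl pkcs12 -info -in certificate.pfx -passin pass:example -noout
```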

6.2.2 How to Generate a CSR Request

To generate a CSR, fill in the suggested fields in the web form of the SSL certificate service provider. The figure above demonstrates an example of this. The minimum set of required fields is the same as given in the section describing the CSR, but some vendors may add their own fields or change the input method.

To generate CSR using OpenSSL, use the following command:

openssl req -new -key private.key -out domain_name.csr -sha256


  • new – creating a new CSR request by direct input in the console. Without this option, the OpenSSL configuration file data will be used;
  • key – the name of the private key required for generation. If the option is not specified, a new private key will be created according to the default algorithm;
  • out – the path to the CSR file being created;
  • sha256 – the hash algorithm used to sign the request.

After executing the command, a request to fill in the required fields will appear in the console.
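To avoid the interactive prompts, the subject fields can also be supplied on the command line with -subj. A self-contained sketch (the key-generation line is included so it runs on its own; all subject values are placeholders):

```shell
# Generate a private key, then create a CSR non-interactively.
openssl genrsa -out private.key 2048
openssl req -new -key private.key -out domain_name.csr -sha256 \
  -subj "/C=US/ST=State/L=City/O=Example Org/CN=www.example.com"
```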

Then send the resulting CSR to the Certification Authority, which will return a personal certificate in response.

6.2.3 How to Configure SSL for Apache

Follow these steps to configure SSL in Apache:

1.   Add the personal certificate issued by the Certification Authority, the private key, and the root certificate to the /etc/ssl/ directory — along with the rest of the certificates in the chain.

2.   Open the Apache configuration file with any text editor: vim, for example. Depending on the server OS, the file may be located in one of the following locations:

  • for CentOS: /etc/httpd/conf/httpd.conf;
  • for Debian/Ubuntu: /etc/apache2/apache2.conf;

3.   If you are installing an SSL certificate on an OpenServer, use the path to its root folder. At the end of the file, create a copy of the "VirtualHost" block. Specify port 443 for the block and add the following lines inside:

SSLEngine on
SSLCertificateFile /etc/ssl/domain_name.crt
SSLCertificateKeyFile /etc/ssl/private.key
SSLCertificateChainFile /etc/ssl/chain.crt

4.   Check the Apache configuration before restarting with the command: apachectl configtest, then restart Apache.
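Putting steps 3 and 4 together, a complete HTTPS virtual host might look like the following sketch. The ServerName and file paths are placeholders; note that on Apache 2.4.8 and later the chain can instead be appended to the file given in SSLCertificateFile:

```apache
<VirtualHost *:443>
    ServerName www.example.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/domain_name.crt
    SSLCertificateKeyFile /etc/ssl/private.key
    SSLCertificateChainFile /etc/ssl/chain.crt
</VirtualHost>
```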

6.2.4 How to Configure SSL for Nginx

Follow these steps to configure SSL in Nginx:

1.   Open a text editor and add the contents of the personal certificate issued by the Certification Authority, and the root certificate — along with the rest of the certificates in the chain. The resulting file should look like this:

#Your certificate#
#Intermediate certificate#
#Root certificate#

2.   Save the resulting file with the *.crt extension to the /etc/ssl/ directory. Please note: the second certificate should come directly after the first, without any empty lines.

3.   Save the your_domain.key file with the certificate's private key to the /etc/ssl directory.

4.   Open the Nginx configuration file and edit the virtual host of your site that you want to protect with a certificate. Perform the minimum setup for the job by adding the following lines to the file:

server {
listen 443 ssl;
server_name your_domain;
ssl_certificate /etc/ssl/your_domain.crt;
ssl_certificate_key /etc/ssl/your_domain.key;
}


  • your_domain — the domain name of the site;
  • /etc/ssl/your_domain.crt — the path to the file created with the three certificates;
  • /etc/ssl/your_domain.key — the path to the file with the private key.

The names of files and directories can be arbitrary.

Additionally, you can configure the operation of the site over HTTP, the type of server cache, the cache update timeout, and the operating time of a single keepalive connection. You can also configure the supported protocols and their level of priority (server set or client set), as well as OCSP responses for certificate validation. Details are given in the Nginx user manual.
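As an illustration of the optional HTTP handling mentioned above, a hypothetical configuration that redirects plain HTTP to HTTPS and restricts the protocol versions might look like this (the domain and paths are placeholders):

```nginx
server {
    listen 80;
    server_name www.example.com;
    # Redirect all plain-HTTP requests to HTTPS.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate /etc/ssl/your_domain.crt;
    ssl_certificate_key /etc/ssl/your_domain.key;
    # Allow only modern TLS versions.
    ssl_protocols TLSv1.2 TLSv1.3;
}
```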

5.   For the changes to take effect, restart the Nginx server with the following command:

sudo /etc/init.d/nginx restart

7. Self-Signed Certificates

7.1 Usage on Windows Server and IIS

7.1.1 How to Create a Private Key

In IIS, a private key is created automatically when you generate a CSR according to the instructions above.

7.1.2 How to Create a Self-Signed Root Certificate

To generate a self-signed root certificate in IIS 10, perform the following operations:

1.   Launch IIS Manager by running iis.msc from the command line or from the visual interface.

2.   Select your server from the Connections list and click on the Server Certificates button.

3.   On the Server Certificates page, click the Create Domain Certificate link in the Actions block.

4.   In the Distinguished Name Properties window of the Create Certificate wizard, fill in the Common Name field (the server name specified in the browser), the remaining fields that were filled when creating the CSR, and click Next.

5.   In the Online Certification Authority window of the wizard, specify in the Specify Online Certification Authority field the repository where you want to place the root certificate. In the Friendly Name field, specify the name of the certificate, and then click Finish.

7.1.3 How to Create an SSL Certificate Signed by the Root

To generate a self-signed SSL certificate in IIS 10, perform the following operations:

1.   Launch IIS Manager by running iis.msc from the command line or from the visual interface.

2.   Select your server from the Connections list and click on the Server Certificates button.

3.   On the Server Certificates page, click the Create Self-Signed Certificate link in the Actions block.

4.   In the Create Self-Signed Certificate window, specify the name of the certificate in the Friendly Name field, select the repository in which the self-signed certificate will be stored in the Select a Certificate Store for the New Certificate field, and click OK.

7.1.4 How to Configure IIS for a Self-Signed Certificate

Configuring IIS for a self-signed certificate requires the same process as for a certificate issued by a Certification Authority.

7.2 Usage on Linux

7.2.1 How to Create a Private Key

Creating a private key with the genrsa command and similar OpenSSL commands is described above.

7.2.2 How to Create a Self-Signed Root Certificate

To generate a self-signed root certificate in OpenSSL, run the following command:

openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem


  • key – a private key created earlier;
  • out – root certificate file;
  • days – the number of days the certificate is valid, starting from the current day.

7.2.3 How to Create an SSL Certificate Signed by the Root

To generate a self-signed SSL certificate in OpenSSL, follow these steps:

1.   Create a CSR according to the instructions above.

2.   Issue a self-signed certificate:

openssl x509 -req -in org.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out org.crt -days 365 -sha256


  • req – indicates that the input is a certificate request (CSR) to be signed;
  • in – the file of the CSR request;
  • CA – the file of the root certificate;
  • CAkey – the private key of the root certificate;
  • out – the output CRT file;
  • days – the number of days the certificate is valid.
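The whole procedure from sections 7.2.1–7.2.3 can be sketched end to end. All file names and subject fields are examples; the final command confirms that the issued certificate chains to the root:

```shell
# 1. Create the root CA key and self-signed root certificate.
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 \
  -out rootCA.pem -subj "/CN=Example Root CA"
# 2. Create the server key and a CSR for it.
openssl genrsa -out org.key 2048
openssl req -new -key org.key -out org.csr -subj "/CN=www.example.com"
# 3. Sign the CSR with the root CA.
openssl x509 -req -in org.csr -CA rootCA.pem -CAkey rootCA.key \
  -CAcreateserial -out org.crt -days 365 -sha256
# 4. Verify that the issued certificate chains to the root.
openssl verify -CAfile rootCA.pem org.crt
# prints: org.crt: OK
```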

7.2.4 How to Configure Apache for a Self-Signed Certificate

Apache configuration for a self-signed certificate is performed in the same way as for a certificate issued by a Certification Authority.

7.2.5 How to Configure Nginx for a Self-Signed Certificate

Nginx configuration for a self-signed certificate requires the same process as a certificate issued by a Certification Authority.

7.3 How to Make Self-Signed Certificates Trusted

7.3.1 On Windows

To make a self-signed certificate trusted, follow these steps:

1.   Find the repository of trusted certificates in the Certificates snap-in of the management console. Right-click on it, and then in the Context Menu that appears, click on the menu item All Tasks > Import;

2.   In the Welcome to the Certificate Import wizard window of the Certificate Import wizard, click Next. Then, in the File to Import window, specify the path to the imported file with the self-signed certificate. To do this, either enter it manually or use the system file search dialog box. Afterwards, click Next.

3.   In the Private Key Protection window of the wizard, enter the password specified when creating the self-signed certificate. Set the checkboxes Mark this key as exportable to allow further export of the certificate for backup purposes, and Include all extended properties, then click Next. Further export will only work if the private key is available.

4.   In the Certificate Store window of the wizard, turn on Place all certificates in the following store, select the Trusted Root Certification Authorities repository, and then click Next. In the next window Completing the Certificate Import Wizard, you will see a list of the installed settings. Click Finish. The imported file will appear in the specified repository.

7.3.2 On macOS

To add a self-signed certificate to trusted certificates, follow these steps:

1.   Open the Keychain Access application by clicking on the icon below and go to the All Items menu item.

2.   Use Finder to find the self-signed certificate file (*.pem, *.p12 or other).

3.   Drag the file to the left side of the Keychain Access window.

4.   Go to the Certificates menu item, find the self-signed certificate that has been added and double-click on it.

5.   Click on the Trust button in the drop-down menu and set the When using this certificate field from System Defaults to Always Trust.

7.3.3 On Linux

To add a self-signed certificate to trusted ones in Linux OS (Ubuntu, Debian), follow these steps:

1.   Copy the root self-signed certificate file to the /usr/local/share/ca-certificates/ directory. To do this, run the command sudo cp foo.crt /usr/local/share/ca-certificates/foo.crt, where foo.crt is the personal certificate file.

2.   Run the sudo update-ca-certificates command.

To add a self-signed certificate to trusted certificates in Linux OS (CentOS 6), follow these steps:

1.   Install the root certificates using the command: yum install ca-certificates.

2.   Enable the dynamic configuration mode of root certificates: update-ca-trust force-enable.

3.   Add the certificate file to the directory /etc/pki/ca-trust/source/anchors/: cp foo.crt /etc/pki/ca-trust/source/anchors/.

4.   Run the command: update-ca-trust extract.

7.3.4 On iOS

To add a self-signed certificate to trusted certificates, follow these steps:

1.   Install any web server and place the certificate file in the root of the application directory.

2.   Go to the URL of the web server, after which the file will be downloaded to the profile of the current user.

3.   Open the Profiles menu and click Install.

4.   Go to Settings > General > About > Certificate Trust Settings and set the switch for the certificate to Enabled.

7.3.5 On Android

To make a self-signed certificate trusted, follow these steps:

1.   Download the file to the device.

2.   Go to Settings > Security > Credential Storage and tap Install from Device Storage.

3.   Find the *.crt that has been downloaded and enter its name in the Certificate Name field. After it has been imported, the certificate will be displayed in Settings > Security > Credential Storage > Trusted Credentials > User.

7.3.6 How to Make a Root Certificate Trusted in Windows AD Group Policies

To make a root certificate trusted in Windows Active Directory Group Policies, follow these steps:

1.   Launch the Group Policy Management snap-in by running gpmc.msc from the command line.

2.   Select the desired domain, right-click on it, and select Create a GPO in this domain and link it here.

3.   Specify the name of the group policy in the window that appears and click OK.

4.   Right-click on the created group policy and click Edit.... On the next screen, go to Computer Configuration > Policies > Administrative Templates > Windows Components > Windows Update. Select Allow signed content from intranet Microsoft update service location and click Edit Policy Settings.

5.   Set the switch to Enabled and click OK.

6.   Go to Computer Configuration > Windows Settings > Security Settings > Public Key Policies and trust the required certificate in accordance with the instructions above.

7.   Repeat step 4 and close the Group Policy Editor. The policy will be applied shortly. To apply it immediately, run gpupdate /force on the command line.

8. Let’s Encrypt

8.1 Usage on Windows Server and IIS

8.1.1 How to Issue a Certificate

To install the Let's Encrypt certificate, an ACME client must be installed on the server. The following implementations are common for Windows:

  • The Windows ACME Simple Utility (WACS) is a command-line utility for interactively issuing a certificate and binding it to a specific site on your IIS web server;
  • The ACMESharp Powershell module is a Powershell library. It has many commands for interacting with Let's Encrypt servers via the ACME API;
  • Certify is a graphical SSL certificate manager for Windows that allows you to interactively manage certificates via the ACME API.

To issue a Let's Encrypt certificate using WACS, follow these steps:

1.   Download the latest release of the WACS client from the project page on GitHub and unpack it into a directory on the server.

2.   Open a command prompt and run the client wacs.exe from the specified location.

3.   Press the N key. This will create a certificate for IIS.

4.   Select the certificate type: DV for one domain, DV for all domains in IIS (SAN), domains corresponding to Wildcard, or a manual list of domains in IIS.

5.   Depending on the choice, WACS.exe will display a list of sites running on the IIS server and will prompt you to select the desired site.

6.   After selecting the site, provide an email address to receive notifications about problems, including certificate renewal issues (several addresses can be given if they are separated by commas).

7.   Agree to the terms of use by pressing the Y key, after which Windows ACME Simple will connect to Let's Encrypt servers and try to automatically generate a new SSL certificate for the site.

8.1.2 How to Configure IIS for Let's Encrypt Certificate

The WACS utility saves the certificate's private key (*.pem), the certificate itself, and a number of other files to the directory C:\Users\%username%\AppData\Roaming\letsencrypt-win-simple. It will then install the generated Let's Encrypt SSL certificate in the background and bind it to your IIS site.


8.2 Usage on Linux

8.2.1 How to Issue a Certificate

To install the Let's Encrypt certificate, an ACME client must be installed on the server. For Linux, the standard client is the Certbot utility.

To issue a Let's Encrypt certificate using Certbot, follow these steps:

1.   Install Certbot on the server according to the instructions on the Certbot website.
2.   Execute the certificate issue command: certbot --nginx or certbot --apache. When launching for the first time, an email address may be requested for receiving notifications about problems, site certificate renewals, and other alerts.

Certbot will analyze the ServerName directive that corresponds to the domain name of the requested certificate in the web server's configuration files. If you need to specify multiple domains or a wildcard, use the -d command-line option.


8.2.2 How to Configure the Web Server for a Let's Encrypt Certificate

After executing the certbot command, the web server configuration will be updated automatically. The certbot client will display a successful completion message, and will also show the path to the directory where the certificates are stored.

9. Certificate Renewal for Linux and Windows

9.1 Paid Trusted

When extending the validity of an SSL/TLS certificate, creating a new CSR is recommended. Generating a new request creates a new unique key pair (public/private) for the renewed certificate.

The web interface of many SSL certificate providers allows you to renew the certificate manually or automatically. After renewal, the user receives a reissued certificate, which then needs to be installed on the web server in accordance with the instructions above.

9.2 Self-Signed

Self-signed certificates are renewed by recreating them and reconfiguring the web server in accordance with the instructions described above.

9.3 Let’s Encrypt

9.3.1 On Windows

Windows ACME Simple creates a new rule in the Windows Task Scheduler (called win-acme-renew) to automatically renew the certificate. The task is started every day, and the certificate itself is renewed after 60 days. For renewal, the scheduler runs the command:

C:\<path to the WACS directory>\wacs.exe --renew --baseuri "< >"

You can use the same command to manually update the certificate.

9.3.2 On Linux

To renew the certificate via certbot, you need to run the following command:

certbot renew --force-renewal

To renew a specific domain, use the -d parameter.

10. Testing

10.1 Services (SSL Checkers) that Allow You to Check SSL Certificates on a Public Server

SSL verification is carried out using online services provided by Certification Authorities, as well as by third-party developers.

These services allow you to gain information about certificates, domains, organizations, cities, serial numbers, algorithms used, their parameters (such as key length) and details about the certificate chain.

10.2 Verification of the Entire Certificate Chain

The entire certificate chain is verified by SSL Shopper, Symantec SSL Toolbox and SSL Checker. The links are given above.

10.3 Checking on iOS (via a Special App)

To check certificates on iOS devices, install the SSL Checker app from the App Store. With this application, you can check the current status and validity of the SSL certificate of any server, including self-signed certificates. The application can detect changes in the certificate parameters and send notifications about it.

10.4 Checking on Android

To check certificates on Android devices, install the SSL Certificate Checker application from Google Play. Using this application, you can check the current status and attributes of the SSL certificate of any server, including the certificate chain.

A complete guide for SSL, TLS and certificates

Sep 29, 2022 — 3 min read

Professionally coordinated operational communications allow military operations to be conducted while preventing escalation and emergency scenarios.

In order to ensure the highest possible security of soldier communications on missions, to prevent espionage, and perhaps even to win the war, it is necessary to use a large number of military information and communication technologies. They should not only protect and provide communications for operational activities but also enable an exchange between military personnel at the "internal" and "secret" communication levels.

Nowadays, the use of mobile devices for communication has become so commonplace that it has even spread to the industries of military and defence. This scenario uses programs for communication, which are often called instant messengers.

The armies of different countries are looking for secure ways to exchange messages. Some are turning to already available commercial solutions, while others are developing their messengers with the help of the open-source community. Let's take a look at what messengers are used in the armies of the world, and what are their features, advantages, and disadvantages.


The US military leadership suggested the use of encrypted messengers Signal and Wickr in combat zones, including in the Middle East. Both were created by the open community and are available for free download.

Open Whisper Systems created Signal, which uses its own Signal Protocol for encryption, a protocol that has also been adopted by other messengers. Wickr offers a military-specific version, Wickr RAM. End-to-end encryption is provided for chat messaging, audio and video conversations, secure screen sharing, and large-scale file transmission and storage.

The usage of Signal and Wickr, on the other hand, violates the US Freedom of Information Act, which states that email and text messages received in official government activities are public and must be made accessible upon request. At the same time, both messengers offer the ability to delete messages, which are not saved on senders' or receivers' devices or the company's servers.


European governments seeking digital sovereignty are developing messengers such as Matrix using decentralised messaging protocols. The adoption of such a protocol enables data to be stored in one's own infrastructure. The messenger software is open source, provides solid end-to-end encryption, and is decentralised. The French Armed Forces built a Matrix-based system of their own: in 2019, they developed the Tchap messenger to replace Telegram, which had previously been used by local government departments for communication.


The German armed services also utilise the Matrix-based BwChat messenger for military communications. The messenger, developed with the help of the nation's Armed Forces' Cyber Innovation Center and Stashcat, provides a secure communication route not only when deployed at home, but also when deployed overseas. Because of end-to-end encryption and mobile application management (MAM), it may be accessed from both professional and personal devices. In a safe communication environment that complies with data security requirements and the GDPR, BwChat blends traditional chat features with cloud storage.

Each user gets their own file storage area and each conversation has its own storage space. User data is encrypted and handled in line with German data protection legislation in a server centre in Hannover.

The program allows for secure document sharing, an infinite number of conversation participants and the surveillance and organisation of movements using the "Share GPS Location" function. It is not dependent on the user's contacts list.


Except for the native messenger Threema, the Swiss Army prohibited all chat applications in 2022. The military can no longer use Signal, Telegram, or WhatsApp.

Because Threema does not require users to enter a phone number or email address when enrolling, no identity may be determined using publicly available information. At the same time, the messenger allows you to identify persons in your contact list by their QR codes.


In the year 2020, the IDF developed a messenger that was functionally equivalent to WhatsApp. It looks and operates just like WhatsApp, but it has additional privacy protections built in for sending extremely sensitive operational data, such as while conducting reconnaissance.


The Indian Army introduced its SAI texting app in 2021. It is comparable to commercial competitors WhatsApp, Telegram, and Signal in that it enables end-to-end encryption for voice, text messaging, and video conversations.

The Indian app offers enhanced security protections since all data is handled on local servers.


There is no open source information available on the Chinese military's usage of instant messengers. Furthermore, most major instant messengers and social networks, including Facebook, Instagram, Twitter, Whatsapp, Telegram, Viber, and even ICQ, are restricted in the nation. Local military troops are most likely using WeChat, the country's most popular and government-controlled messenger.

The safety and security of data are extremely important, particularly when it comes to the official communications of the military and the institutions of the defence sector. Every officer in the armed forces is responsible for ensuring the safety and well-being of their colleagues. Success in protecting secret and personal information is directly proportional to the level of security, encryption, and compartmentalization of the connection.

The secrets of military communication

Sep 19, 2022 — 5 min read

Quantum computers, which are very particular kinds of computers, are capable of solving problems in a very short amount of time, even those that would take a supercomputer a very long time to solve. It is true that doing these tasks is still a long way from being a reality, and quantum systems have many limitations. But, as we all know, progress is a never-ending process, and it is possible that in the not-too-distant future, this technology will rule the planet. Let's have a conversation about how this cutting-edge technology may impact our security.

Data encryption is the key to online security

Encryption is essential to the protection of data on computers and across the Internet. Encrypting data entails utilizing a secret rule and a collection of characters, known as ‘the key’, to turn it into a meaningless jumble. To comprehend what the sender was trying to communicate, one has to decode that jumble using the key.

One of the most basic types of encryption is when each letter is replaced by a number (say, A - 1, B - 2, and so on). The word ‘baobab’ will appear as ‘2 1 15 2 1 2’ in this example, and the key is the alphabet, where each letter corresponds to a number. Of course, more sophisticated rules are utilized, but the idea of the operation is roughly the same.
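As an aside, this alphabetic substitution is easy to sketch in the shell; here the word ‘baobab’ (lowercase ASCII assumed) encodes to the digit sequence used in the example:

```shell
# Encode each letter as its position in the alphabet (a=1, b=2, ...).
echo "baobab" | fold -w1 | while read c; do
  # printf "%d" "'x" yields the ASCII code of x; 'a' is 97, so subtract 96.
  printf "%d " $(( $(printf "%d" "'$c") - 96 ))
done; echo
# prints: 2 1 15 2 1 2
```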

When there is only one key for all interlocutors, as in our case, such ciphers are referred to as symmetric. Before you may use a symmetric cipher, all interlocutors must acquire this key in order to encrypt and decode their own communications. Furthermore, it must be transmitted in an unencrypted format (after all, there is nothing to encrypt with yet). If you have to send the key over the Internet, attackers can intercept it and successfully spy on everything you've secured with it. This is not very practical.

As a result, there are encryption methods that employ two keys: a private key for decryption and a public key for message encryption. Both are created by the recipient, who never gives the private key to anyone, so it cannot be intercepted.

The second, the public key, is intended to allow anybody to encrypt data with it, but the data can only be decrypted with the accompanying private key. As a consequence, sending the public key in unencrypted form poses no risk. This method of encryption is known as asymmetric.

Both the ‘lock’ and the ‘key’ (that is, public and private keys) in current encryption systems are typically huge integers, and the algorithms themselves are constructed on sophisticated mathematical processes using these numbers. Furthermore, operations must be designed in such a way that ‘turning them back’ is exceedingly difficult. As a result, having the public key will not assist someone attempting to break the cipher.

Quantum cipher cracking

What this means is that anything that is encrypted with a public key can only be decoded by its private ‘partner’, and no one else. This makes the private key the target for potential adversaries. Because it is not sent anywhere, as we have stated previously, it is not feasible to intercept it. However, in principle, one might derive it from the public key.

However, cryptographic methods are purposely intended to make it difficult to solve the challenge of acquiring a private key from a public key in a reasonable length of time. This is done by preventing the reverse engineering of public keys into private ones.

Quantum computers become useful at this point in the discussion. The simple truth is that, as a result of their design, they are capable of solving such problems significantly faster than conventional computers.

When a quantum computer is utilized, the unreasonably long amount of time needed to decipher a cipher can be reduced to a more manageable amount of time. And because of this, the very idea of utilizing a cipher that is susceptible to being broken by a quantum computer may be rendered moot in some theoretical sense.

The advent of quantum computers is imminent, and when they do arise, the world will be forever altered. Their introduction might completely change the way physics and medicine are practiced, not to mention the way that information is protected. So, how should we get ready for this?

Protection against quantum hacking

If the thought of your data being decoded and stolen by wealthy criminals using a quantum computer makes you cringe, don't worry: security experts are already planning for protection. Currently, there are some fundamental procedures in place that should protect user information from attackers.

Traditional encryption algorithms that are resistant to quantum attacks

It's difficult to believe, but we're already employing encryption methods that quantum computers can't hack. For example, despite quantum computers' increased speed, cracking the popular AES encryption used in instant messengers such as WhatsApp and Signal remains impossible. Nor do quantum computers pose a serious danger to many other symmetric (one-key) ciphers. However, with symmetric ciphers we're back to the issue of distributing the key to all participants in the discourse.

Algorithms that are created expressly to thwart quantum assaults

Although no one is currently breaking asymmetric ciphers, mathematicians are actively developing new ones that are resistant to even the most powerful of quantum devices. So, by the time the bad guys get a hold of quantum computers, data defenders will almost certainly be able to strike back.

Encryption in several ways at once

Encrypting data numerous times with various methods is a useful and accessible approach. Even if attackers crack one type of encryption, they will not necessarily be able to break the rest.

Using quantum technologies against themselves

Quantum key distribution systems are employed for the secure usage of symmetric ciphers, which, as previously stated, are less resistant to cracking with the use of quantum computers. They do not prevent interception, but they do allow you to know for certain whether the key was intercepted. If the encryption key is intercepted along the route, you can refuse to accept it and send another one. True, this requires specialized equipment, but it is very self-contained and already operates in both government-backed and commercial businesses.

The entire internet will not be hacked

So, while quantum computers appear to be capable of breaking ciphers that regular computers cannot, they are not omnipotent. Furthermore, security measures are being created proactively to prevent attackers from gaining an advantage in the arms race.

So, the world's encryption is unlikely to fail all at once; rather, certain algorithms will ultimately be replaced by others, which is not always a negative thing. This is happening right now, because after all, technology is not a static industry.

As a result, it's occasionally worth investigating what encryption technique a certain service employs and whether the algorithm is regarded as antiquated (susceptible to hacking). Indeed, supposing that the era of quantum computers has already arrived, it would be prudent to begin encrypting extremely important data meant for long-term preservation.

What quantum computers can change and how different services aim to safeguard your data as a result

Sep 9, 2022 — 4 min read

You want others to see your music video if you publish it on the internet. However, if your video is for corporate training, you don't want unauthorised persons to see your sensitive company information. Video encryption can keep unauthorised people from accessing your content.

Data breaches, illegal sharing and data theft are all dangers for modern businesses. To be competitive in today's world, you must rely on content security to keep your company's information safe and secure.

Encrypting your videos is one method of safeguarding your company's data. Once your video footage has been encrypted, you can safely share it with your employees, customers, partners, and prospects.

There are three ways to safeguard your videos from unwanted access: encrypting the video, protecting the video, or doing both.

What is encryption?

While encryption refers to the hiding or modifying of data, protection refers to safeguarding the file using codecs, passwords, container formats, and so on, all so that outsiders cannot access the data contained within.

However, for increased security, you can use both encryption and protection, which is the most robust way to secure your content. In informal conversation, the term encryption can refer to encryption, protection, encoding, or any combination of the three. As a result, encryption in this context means securing your data in every way possible, which includes both encryption and protection.

What exactly is video encryption?

The method of making your video secure from prying eyes is known as video encryption. Why should you encrypt your videos? There are two main reasons: the first is personal use, and the second is Digital Rights Management (DRM).

Personal encryption, as the name implies, is used to protect one's privacy. For example, suppose you make a film and want to share it with your relatives, mates, customers, and so on, but you don't want unwanted people to access it.

Digital Rights Management is similar in concept but more sophisticated. The various degrees of DRM are as follows:

– Software-centric video
– Adaptive streaming
– Qualitative and quantitative video streams for various price points
– Device or media-centric video
– Region-centric video

So, what exactly is the distinction between personal encryption and Digital Rights Management? Personal encryption keeps everyone out except the intended recipient. DRM, by contrast, shuts people out either temporarily or permanently, automatically and under certain conditions.

Qualitative and quantitative video streams for various price points

If you're willing to spend more money, you can obtain 4K, but if you want to save money, you'll have to settle for SD. Price directly affects the resolution (the physical data of the video stream) and therefore the quality: the higher the price, the higher the quality.

Region-centric video

Do you want to target a specific region? Perhaps you don't want the video to be seen in other areas or countries. The reasons for this type of DRM could be because you are legally prohibited from catering to other regions, or you want to influence market dynamics. In such cases, region-specific management is required.

Device or media-centric

This is done to prevent your material from playing on incompatible devices. You generate media that is limited to a specific platform, such as iTunes, Kindle, or Apple TV. Devices outside that ecosystem are unable to play it.

Software-centric video

To play some videos, you must have proper software support and/or pay a licence fee. Specific NLEs will not play certain codecs if the operating system does not support them or if the licence is not paid for. As a result, codec licensing is yet another method for controlling video consumption.

Adaptive streaming

During adaptive streaming, the video dynamically adjusts its resolution and bit rate to match the viewer's internet speed and/or other circumstances.

How are online videos kept secure?

The video is first encrypted using conventional encryption and stored on a secure server, so it is not available for everyone to view. To view or access the video, you must first log in to the server using a confirmed email address and password.

The video is delivered to the viewer's computer through a secure channel and may be watched using a browser that decrypts the video. The browser prevents other applications from viewing or recording it, and it also prevents the OS from storing the material on the viewer's PC. The secure connection ends as soon as the viewing is over. The data from the viewer is passed on to the content provider for targeted marketing and statistical study. You may also use this data to track down pilferage and leaks. And, if the video is somehow downloaded, the encryption ensures that it does not play in an ordinary media player.

How exactly does video encryption prevent piracy?

The "pirate" must have sufficient expertise to break the encryption. To obtain a high-quality stream, pirates must pay up, and, of course, once they pay, the server holds identifying information about them.

To obtain an accessible format, the pirate must re-encode the encrypted stream using software. The procedure either increases the file size or decreases the quality of the source. As the file size grows, the pirate must pay more to transfer the data again. To detect correlations, cloud algorithms can compare the uploaded material against the original stream.

Options for video encryption

When it comes to video encryption, there are really two scenarios: video at rest and video in motion (streaming).

Video at rest

Here are some possibilities for videos that remain on hard drives or are downloaded to play later:

– AES encryption standard - 128, 192, or 256 bits
– Google Widevine
– Apple Fair Play for iTunes videos
– Marlin
– PMP (Protected Media Path) in Windows

Video in motion or video streaming

Some examples of video in motion or streaming video include:

– HTML5 DRM standard

The Advanced Encryption Standard (AES), which has been approved by the United States government and is currently utilised all over the world, is the encryption method that provides the highest level of security.
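
As a sketch of what AES protection looks like in practice, here is AES-256 in GCM mode applied to a chunk of video data. This assumes the third-party Python `cryptography` package; the chunk and the 12-byte nonce are illustrative choices, and the same key and nonce are needed to decrypt.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # AES-256 key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must be unique per encryption

plaintext = b"frame data from a video file"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)   # None = no associated data

assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```

GCM is a common choice here because it authenticates as well as encrypts: a tampered ciphertext fails to decrypt instead of silently producing garbage.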

How to secure your digital content using video encryption

Sep 8, 2022 — 1 min read

Running tasks in the background

A new mechanism for handling tasks allows you to run them in the background. For example, you can run an LDAP synchronization task and still work in Passwork. Your synchronization task will run in the background.

You can see scheduled and completed tasks on the “Tasks” page. Here you can also find the configuration instructions for your operating system.

Display a favicon in the password list

The Passwork interface has become even more user friendly and convenient. If a password has a URL, a website icon will be displayed next to its name.

Automatic favicon loading can be set up by administrators on the “Company settings” page. In this case background tasks should be set up.

Other changes:

  • Automatic session termination in the mobile app and Passwork extension when API key is changed
  • Removed white background in the dark theme when loading pages
  • Fixed bug displaying the results of an outdated search query
  • Improved validation of TOTP keys
  • Fixed empty messages in Syslog
  • Added login validation with UTF-8 encoding
  • Added automatic LDAP host swap :\\ → ://
  • Fixed errors in LDAP code related to the migration to PHP 8
  • Redesigned login and registration forms

If you are already using Passwork, update your version — How to update Passwork
Or request a free demo at

Introducing Passwork 5.1

Aug 30, 2022 — 4 min read

Nearly 20 years ago, the National Institute of Standards and Technology (NIST) established guidelines for secure passwords, and they are still used by many websites, portals, and other services. You’re likely familiar with these password requirements: at least 8 characters, with both capital and lowercase letters, digits, and special characters. Despite these guidelines, passwords that meet these requirements are no longer safe from modern attackers. The only thing any of us can do to improve the security of our accounts is to make sure that our passwords are lengthy, complicated, and unique for each account. Because of its strict password management demands, however, this strategy is laborious and intimidating for many.

The Same Password Rules Do Not Apply Today

In the modern day, password-based security is no longer seen as sufficient. Our digital world is continuously expanding, so it is more important than ever to make sure that our data is safeguarded from cybercriminals. The increasing usage of internet services gives cybercriminals an opportunity to target people in more sophisticated ways. One explanation is that, while we benefit from technological improvement for our personal, social, or economic growth, cybercriminals have likewise exploited improved computer graphics cards and machine learning to enhance their attack strategies. In addition to the problem of more sophisticated cyberattacks, there are two interrelated problems with conventional password rules:

The first concern lies in our human nature — keeping track of passwords is tough

You may take a few steps as an individual to increase the security of your passwords. Start by lengthening and making your passwords more complicated. Second, create a unique password for each website you visit. The difficulty of remembering a password increases with its complexity. As a result, we frequently select passwords that are not entirely suitable yet are simple to remember. The difficulty of managing several complicated passwords for every online account leads to the frequent reuse of the same passwords across multiple platforms. As a result, a successful attacker immediately wins big.

However, the high level of password complexity necessary to maintain online safety should not be blamed; rather, the real problem is our inadequate password management habits. Using a password manager to generate and store secure passwords is a practical solution. It is not humanly possible to manage strong passwords for all of our internet accounts without assistance such as password managers. Because people can't recall complicated, random sequences of letters, numbers, and special characters, they are more likely to write their passwords down. Passwords left exposed in digital files stored on a computer or in notes on a desk are simple for attackers to find and read.

The second problem is that passwords have a mathematical limit

There is only ever a finite number of potential password combinations, since a password is a mix of letters, numbers, and symbols. As a result, the standard technique for breaking passwords is the brute force attack, which tries all possible combinations of letters, numbers, and symbols until the correct one is identified and the password is broken. Theoretically, a stronger password is one that is harder to guess due to its length, complexity, and number of possible permutations. However, attackers are now far more frequently exploiting Graphics Processing Units (GPUs) to break passwords. GPUs are a component of a computer's graphics card and were first designed to speed up the loading of images and video; they have also proved extremely fast at computing hashes, the core operation in brute force attacks.

According to studies on password cracking times, passwords may be cracked much more quickly using sophisticated computer graphics cards. Using the most recent computer graphic cards, an 8-character password that used to take 8 hours to crack in 2018 now only takes 39 minutes (see the conclusive 2022 results in the table below). Passwords are gradually getting simpler to crack as a result of recent technical developments, which is a concerning trend. More crucial, however, is the fact that if a password has already been stolen, repeated across sites, or contains basic phrases, attackers may access your accounts right away, regardless of the complexity of the password or the attacker's graphics card.

Consider a 4-character password made up of all 26 letters in the Latin alphabet (case-insensitive) in order to visualise this mathematical example.

26^4 = 456,976 possible password combinations

When you include digits, uppercase and lowercase letters, and special characters, the number of viable choices rises to

95^4 = 81,450,625 possible password combinations

However, because the password must contain at least one special character, one number, one capital letter, and one lowercase letter, the quantity drops to

5,353,920 possible password combinations.

Nevertheless, assuming there are no password-entry security measures (such as automatic account blocking), a computer can crack this in less than a second.
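
The arithmetic above can be checked directly. The 95-character set is assumed to break down as 26 lowercase + 26 uppercase + 10 digits + 33 special characters; and note that with exactly 4 characters and four required classes, each character must come from a distinct class (hence the 4! orderings), which is why the "at least one of each" rule shrinks the space so dramatically.

```python
from math import factorial

lower, upper, digits, special = 26, 26, 10, 33   # 95 printable characters in total

# Case-insensitive Latin letters only
print(26 ** 4)                  # 456976

# All 4-character passwords over the full 95-character set
print(95 ** 4)                  # 81450625

# At least one character from each of the four classes: with length 4,
# every position must come from a different class, in any order.
constrained = factorial(4) * lower * upper * digits * special
print(constrained)              # 5353920
```

The same counting argument explains the general principle: composition rules carve away most of the search space, while each extra character multiplies it by 95.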

Increase the length and complexity of passwords

Longer and more complicated passphrases are strongly advised when creating new passwords. That way, potential attackers will have a harder time breaking them. It's crucial to take into account the popularity of the selected password combination in addition to the number of alternative password combinations. For instance, lists of frequently used passwords or phrases, such as "qwerty," "password," or "12345," are routinely fed into brute force attacks.

Therefore, the password should be completely unique or not contain any words at all. For instance, one technique would be to employ acronyms or mnemonics, such as generating a password out of the first few characters of a long text. As an illustration, consider making the password ‘Ilts@7S!’ out of the words I love to ski at Seven Springs.
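
The mnemonic step is easy to mechanise. A minimal sketch (the `acronym` helper is a hypothetical name, and the symbol substitutions such as a → @ and Seven → 7 from the example above are applied by hand afterwards):

```python
def acronym(phrase: str) -> str:
    # Take the first character of each word, preserving its case.
    return "".join(word[0] for word in phrase.split())

base = acronym("I love to ski at Seven Springs")
print(base)  # IltsaSS
```

Turning "IltsaSS" into ‘Ilts@7S!’ then only requires the substitutions the author chose, which is the part that keeps the result memorable to you but meaningless to a dictionary attack.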

Password length and complexity alone are insufficient

We are aware that adding length and complexity to passwords is the only method to increase their strength and, consequently, the safety of our accounts. The time it typically takes an attacker to break a password in 2022 using a powerful commercial computer is displayed below. This chart, which has been analysed and periodically updated since 2018, shows how quickly passwords can be broken on current machines. This pattern indicates that, despite our best efforts to create passwords that are longer and more complicated, passwords alone are no longer sufficient to meet the required internet security standards.

In conclusion, password rules increase the complexity of passwords without necessarily enhancing their security. The answer to that is to use no passwords at all. However, we’ll discuss that in part 2 of this article.

Why your passwords are no longer secure (Part 1)

Aug 23, 2022 — 4 min read

These days, locking your front door and protecting your WiFi network are almost equally important. Without any protection, hackers may access your network and, via any of your connected devices, including your video doorbell, your personal information, such as your bank data. Modern WiFi routers utilise security protocols and encryption technologies to mask your sensitive data so that you can protect yourself. In order to select the proper security settings for your WiFi network, you must understand the differences between WEP, WPA, WPA2, and WPA3.

WiFi Encryption: What is it?

Nowadays, the majority of WiFi routers encrypt all the data sent by your connected devices, including your computer, smartphone, and smart home appliances. The router converts all of your data into "cipher text," so nobody else can access any of your personal information without the decryption key.

An encryption protocol can be compared to a combination lock. To unlock your data and decode it into plain text, you need the appropriate combination.

The WiFi Alliance, a nonprofit organisation that holds the Wi-Fi trademark, has certified all WiFi protocols. There have been four distinct encryption systems throughout the years: WEP, WPA, WPA2, and most recently, WPA3.


WEP

The WiFi Alliance approved the first wireless security standard, called WEP (Wired Equivalent Privacy), in 1999. The WEP standard was initially intended to offer security that was comparable to that of a wired connection, however several security holes were found over time.

According to the FBI, WEP really offers "little to no protection, since WEP can be broken using publicly accessible tools." Owing to these security concerns, the WiFi Alliance formally retired WEP in 2004. It's still worth remembering, though, that utilising WEP is preferable to employing no security mechanism at all.


WPA

The WPA (WiFi Protected Access) standard was introduced in 2003 as a stopgap measure to take the place of the WEP standard. WPA employs Temporal Key Integrity Protocol (TKIP) to dynamically produce a different key for each data packet delivered, in contrast to WEP, which uses the same key for each authorised system.

The WiFi Alliance deprecated the WPA protocol in 2015 because it "no longer provides the level of security required to safeguard consumer or corporate WiFi networks" due to a range of uncovered security weaknesses.


WPA-PSK

A streamlined WiFi security protocol called WPA-PSK (Pre-Shared Key) was created for residential networks. Similar to WEP, it employs a static key to make things simpler, but the key automatically changes on a regular basis to stop hackers from breaking into your network.


WPA2

When we compare WPA with WPA2 (WiFi Protected Access Version 2), we can observe a substantial increase in security. WPA2, which was introduced in 2006, is identical to WPA but replaces TKIP with the more powerful Advanced Encryption System (AES).

The US government uses the same encryption standard, AES, to safeguard secret materials. With WPA-AES, relatively few security holes have been found, and the majority of them may be avoided through the use of a strong password.

Since WPA2 certification became required in 2006, any router produced after that year must support WPA2. When you connect an older device, WPA2 routers will still default to WEP, so be careful to turn off WEP on your router to close these security gaps.


WPA3

In 2018, the WiFi Alliance certified WPA3, the newest WiFi security technology. WPA3, the most recent network security protocol, enhances the security characteristics of WPA2 by introducing new ones.

For instance, WPA3 verifies authentication via a "handshake" between your network and any of your wireless devices. When a device is offline, it allows only a single guess at the WiFi password. This safeguard forces an attacker to be within direct range of your router rather than guessing remotely.

Even if WPA3-certified items are becoming more widely available, not everyone will have access to them. If your router is outdated, you may need to replace it or wait in the hope that your manufacturer releases an update that enables WPA3 usage.

What WiFi security protocol is the best?

The WiFi Alliance advises using WPA3 as your wireless security protocol if your router is compatible with it. However, since WPA3 is still so new, you might need to utilise WPA2 if you have older devices connected to your network.

  • WPA3-Personal: The best security setting for home WiFi networks
  • WPA3-Enterprise: The best security setting for businesses
  • WPA2 (AES): The second-best security setting, available on more routers
  • WPA/WPA2-PSK (TKIP/AES): The best security setting for networks with older devices because it enables you to use both WPA and WPA2, but it is not available on most routers
  • WPA2-PSK (TKIP): Still usable, but it only provides you with minimal security
  • WPA-PSK (AES): An updated version of WPA that replaces TKIP with AES, but you should only use this setting if there are no better options available
  • WPA-PSK (TKIP): No longer considered secure
  • WEP 128: Risky
  • WEP 64: Highly risky, but better than having no security
  • Open network: No security at all

When you get a new WiFi router, the first thing you should do is create a strong, unique password for your WiFi network. The WiFi Alliance recommends that you use a password that is at least 8 characters long and contains letters, numbers, and special characters.

After you create a password, the WiFi Alliance also suggests that you change it at least once a year. You should also change your router’s login credentials, install an antivirus program, and update your router’s firmware.

What is WEP, WPA, WPA2 and WPA3?

Aug 15, 2022 — 5 min read

With iCloud, you can recover data from any iOS device in just seven steps.

Although Apple products are known for their high performance and durability, problems with your iPhone, iPad, or Apple Watch can arise at any time. Fortunately, backing up Apple devices to iCloud is simple. However, just like with the best data recovery tools, you'll need to know how to restore a backup from iCloud in case something goes wrong.

We have thus provided these seven simple steps to help you reset your iOS device using an iCloud backup. Although iCloud is one of the better cloud storage options, it has one significant drawback: you must wipe your device completely before restoring a backup. If you require a fix for this problem, skip to step 7 of this article.

How to restore an iCloud backup: Setting up

Before restoring a device, you must configure iCloud's backup feature because there won't be anything to restore otherwise.

The best time to do this is when you first set up your device, but you can do it whenever you choose.

Go to the Settings app and tap on your name at the top to get started. Then, select "iCloud" and then "Backup" from the list. Ensure that "Backup" is turned on. iCloud will automatically back up your data when your device is locked, plugged in, and connected to WiFi after backup has been enabled.

To manually initiate an iCloud backup, go to Settings > iCloud > Backup and press "Back Up Now." Your Apple device can always be reset to the most recent iCloud backup if you encounter a technical problem or need to recover lost data.

Remember that the complimentary iCloud account that comes with your Apple ID only offers 5GB of storage space. The majority of Apple’s products have much more internal capacity than that. For instance, the iPhone 13 has at least 128GB of internal storage.

Consider eliminating any unnecessary files or upgrading to iCloud+, which starts at $0.99 per month for 50GB of storage, if you try to backup your smartphone to iCloud and discover that your iCloud storage is full.

Step 1: Get ready for a factory reset on your device

You must carry out a factory reset prior to restoring your device from a backup using the standard Apple procedure. This implies that you must delete all of the content that is currently on your device. You can work around this by utilising third-party software if you don't want to perform a reset. To learn how to restore from a backup using third-party software, skip to step 7.

Examine your notes, files, images, and any other apps you think may contain crucial information. After you complete the reset, anything that was added since your most recent backup will be irretrievably gone.

Step 1b (Optional): Disconnect your gadget (Apple Watch only)

Resetting an Apple Watch entails an extra step.

You must unpair your Apple Watch from your iPhone as a separate step before moving on to step 2 if you're resetting an Apple Watch.

Open the Apple Watch app on your iPhone and go to My Watch > All Watches to get started. Tap the details button next to the watch you wish to unpair. The system will prompt you to decide whether to keep or cancel your mobile plan. Keep it, because you will soon restore the watch from a backup.

Before continuing to the next step, tap once more to confirm, then enter your Apple ID password to finish the unpairing procedure.

Step 2: Reset your device

Go to Settings > General > Transfer or Reset [Device] once you are certain that nothing crucial will be lost. To start the factory reset, tap "Erase All Content and Settings" after that. You will now be required to enter your Apple ID password or device passcode.

Wait for the reset to finish after entering the passcode. Depending on how much content is on your device, this can take a while. When you see the ‘Hello’ screen from when you first set up your iOS device, you will know the reset is complete.

Step 3: Configure and turn on your gadget

After a reset, you'll need to perform an initial installation once more.

You will need to go through the basic setup procedures the same way you did when you originally acquired your device because your iOS installation is now essentially brand new. Tap the ‘Hello’ screen to get started, then select your language.

To configure your device and connect it to the internet via WiFi or cellular data, simply follow the onscreen instructions. Lastly, set up your passcode, Face ID, and Touch ID. Keep in mind that not all Apple devices have all of these functions. You are now ready to restore your iCloud backup.

Step 4: Restore iCloud

You will have a number of options to restore your data on the following screen. "Restore from iCloud Backup" is the first choice; tap it. iCloud will now ask you to log in with your Apple ID.

You will get a list of available backups after logging in. Unless you want to backdate your device to a certain day and time, pick the most recent one. iOS might tell you at this point that you need to run an update. If this happens, let the update finish installing before trying to restore your device.

Your files, notes, and photographs will all be restored at this time. Restoring your apps is the subsequent step.

Step 5: Restore your apps

Once you're logged in, restoring previously purchased apps is simple:

Log in with your Apple ID to recover apps that have been purchased. While your device downloads all of the apps linked to that ID, stay connected to WiFi. If you have several Apple IDs, sign into each one separately and wait for the corresponding apps to download.

Depending on how many apps you have, this stage may take some time, so be prepared to wait.

Step 6: Finish the setup procedure

There are a few last-minute adjustments to do before your device is ready for use, once you have finished restoring your data and applications. To continue configuring your iOS device, follow the on-screen instructions.

You will be prompted to choose whether you want iOS upgrades to launch automatically or manually as well as if you want to share data with Apple for development purposes.

Additionally, you'll be prompted to configure default features like Screen Time, Apple Pay, and Siri. Once you've finished, a big congratulations is in order! Your iOS device has been fully restored from an iCloud backup.

Step 7 (Optional): Restore your smartphone using third-party software without performing a reset

Using an iCloud backup to restore your iOS device can be a laborious and time-consuming operation. It can take hours to perform a factory reset, download the backup, download your apps again, and, possibly, re-update iOS.

Going through the entire reset and restore process can be a major inconvenience if you just lost a tiny amount of data, such as a single image or a few texts. Fortunately, certain third-party applications, like EaseUS and MobiMover, let you selectively restore a small amount of data from an iCloud backup file without performing a complete reset.

Download the recovery program of your choice to get started. Keep in mind that most third-party recovery software is not free, but it usually provides a free trial that lets you download only a limited amount of data. If this is an isolated incident, you can recover a few files using the trial at no cost.

How to recover an iCloud backup: Synopsis

You now understand how to backup an iOS device to iCloud and restore your device using that backup. One of the numerous advantages of this robust cloud storage service is the ability to use iCloud to backup your gadgets.

How to recover an iCloud backup

Aug 5, 2022 — 5 min read

Every day, people all over the world are spending more and more of their waking hours online. In addition, we're increasingly using our mobile devices for much of our internet activity. The banking industry is unquestionably following suit.

More than seventy percent of Americans conduct some or all of their banking transactions online. Mobile devices now account for more than half of all website traffic, and financial institutions are not far behind.

How safe is mobile banking?

Of course, popular things aren't always safe. Passwords are a prime example. Convenience is a major factor in the migration to the online world and mobile banking. Many people simply accept the new reality without weighing up the pros and cons.

In this article, we'll look at the dangers of mobile banking and what you can do to keep your information safe.

Is mobile banking security at risk?

The most secure method of banking is, without a doubt, doing it in person. But even if you are worried about being hacked, you shouldn't abandon convenience because of it. If you're aware of the dangers of online and mobile banking, you've already taken the first step in safeguarding yourself, so don't be paranoid.

If you're using a web browser on your PC or a mobile banking app on your phone, you face the same basic hazards. However, the vulnerabilities of various devices vary. Moreover, different apps necessitate the possession of a diverse range of hacking capabilities.

People who are increasingly relying on their smartphones rather than PCs may want to consider the following:

When it comes to security, is online banking more reliable than mobile banking?

Indeed, this is a valid question, and the detailed response requires some thought. Traditional PCs are still the primary target of most viruses. Mobile malware remains rare, because hackers must craft it for far more precise targets than just a web browser.

Now that's wonderful news, right? The bad news is that research conducted by security experts on mobile banking apps has revealed that nearly all of them contain at least one vulnerability. These are rarely high-severity flaws, however, so if you're careful, you can escape a lot of trouble.

Keeping your device safe is an important part of exercising caution. While your desktop computer is likely to stay put, your phone is more likely to follow you wherever you go. As a result, it's more likely to end up in the wrong hands. If you’re new to the smartphone era, this is an issue. It's still not a good reason to give up on mobile banking, however.

The best ways to keep your online banking information safe

Are you still unsure of what those safeguards are? We've compiled a list of our top five picks. These measures should cover all but the most serious threats. Whether you're using mobile networks or your home internet, these tools should enable you to keep your activity safe at all times.

Use a VPN

In order to protect your mobile banking, you should use a virtual private network (VPN). Hackers can't see what you're doing if you hide your IP address and avoid internet tracking.

It doesn't matter if you're using public Wi-Fi or not; public networks, by themselves, are extremely unsafe. A top-rated VPN like ExpressVPN or NordVPN, on the other hand, brings the security of your home internet connection with you wherever and whenever you travel. Because there is a virtual barrier between them, your phone's traffic can't be linked to your online banking activity.

This does come at the cost of a bit of extra time, so the convenience of online banking can suffer as a result. If you're using a VPN, your bank won't immediately know that it's you trying to get into your own account, so there may be an additional stage in the verification of your identity.

Keep your devices safe!

Security risks in online banking aren't always posed by external sources such as the internet. Defending against direct device breaches is the first step: always know where your phone is, and make sure it stays protected even if it gets lost.

In practice, that means locking your home screen with a PIN or facial recognition. Log out of apps and websites when you're done, and tell them not to save your passwords. The more security measures you have in place, the more time you'll have to notify your bank if your phone is stolen.

Use only long, complex passwords

The majority of websites demand that you choose a complex password when you set up or update your account. However, you should be aware of the following guidelines:

  • Make use of both capital and lowercase letters, digits, and other characters to enhance complexity;
  • Never use the same password on more than one website, and make sure it's difficult to decipher. A strong password can be generated with the aid of specialised software.
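As a sketch of these guidelines, here is how a strong password could be generated with Python's standard `secrets` module. The 16-character default and the character classes are our own illustrative choices, not a Passwork requirement:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password that mixes lowercase and capital
    letters, digits, and other characters, per the guidelines above."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every character class is represented at least once.
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password

print(generate_password())  # a fresh 16-character password on every call
```

Because `secrets` draws from the operating system's cryptographic random source, the result is suitable for security purposes, unlike output from the ordinary `random` module.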

At this point, most individuals wonder, "How am I supposed to remember so many strong passwords?" We're not counting on you to become a walking, talking, thinking machine. Despite the common belief that you should never write down your passwords, doing so is fine, provided they are kept secure and separate from the devices on which they are used.

Keeping your online banking password in a separate location from your phone is the best way to keep it safe. Do not reveal what this location is used for.

Installing a password manager, on the other hand, allows you to store unique passwords for each website you visit. After that, all you have to remember is one secure password and the manager may log in on your behalf to all of your other accounts. And remember, that’s what we offer at Passwork.

Check your bank's security practices

Your bank's website should have instructions on how to keep your personal data safe. We strongly suggest that you take the time to read it. Even if you don't comprehend all that they say, you should be able to get an idea of whether or not their methods are secure.

The padlock icon, which indicates that the website is correctly encrypted, is one of the most obvious things to look for. Two-factor authentication is another option that can be used. Even if you don't feel the need for it, you should turn it on just in case. Each time you log in, you must either answer a security question or provide a one-time security code.
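The one-time security codes mentioned above are typically generated with the TOTP algorithm standardised in RFC 6238. As a rough sketch, and assuming the bank uses standard TOTP rather than a proprietary scheme, code generation looks like this:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30, t: float = None) -> str:
    """Generate a time-based one-time code per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // period)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian time counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC 6238 reference secret ("12345678901234567890" in base32):
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(secret, digits=8, t=59))  # 94287082, the RFC 6238 test value
```

Your authenticator app and the bank's server run the same computation on a shared secret; because the counter is derived from the current time, each code expires after its 30-second window.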

Your bank's dedication to security is demonstrated by measures such as these. It's also an indication of how safe their applications are. If your bank doesn't prioritise security, it's time to find a new one. It's likely that a bank that doesn't care about customer service isn't concerned about security either.

Recognize scams and phishing attempts, and avoid them at all costs

No matter how a message is worded, your bank will never ask you for your account credentials. If you receive such a request, it is almost certainly a phishing attempt.

In order to deceive people into disclosing personal information, hackers frequently send emails that appear to be from legitimate organisations. Fake websites may be used to trick you into clicking on dangerous links.

You’re sure to fall for this kind of scam if you don’t know what to look for. However, you can easily avoid it by teaching yourself to be sceptical of all unsolicited texts. Any notification you receive from your bank should be checked against the bank's website if you are unsure about it.


The better informed you are about internet safety, the better off you'll be in the long run.

Precautions like using a secure VPN and using strong passwords will help you stay safe online while also teaching you how to spot potential risks. That's why ExpressVPN is our top recommendation for online banking security.

Using our advice, you should be able to begin using mobile banking safely. You'll soon become used to the convenience of mobile banking if you're vigilant.

Is mobile banking safe? Top 5 safety tips

Jul 29, 2022 — 4 min read

In order to keep its customers' devices safe, both Apple and Android employ a variety of safeguards. A group of IT security specialists from around the world looked at the effectiveness of these tools, and that’s what we’re going to be discussing today.

Indeed, IT security researchers from Germany and the US conducted a study into how mobile phone users pick their PINs and how they may be persuaded to choose a more secure number combination. According to the researchers, six-digit PINs are no more secure than four-digit ones in terms of protection. Apple's usage of a "blacklist" to keep track of frequent PINs might be improved, and it would make more sense to deploy one on Android devices as well, they found.

Dr. Maximilian Golla of the Max Planck Institute for Security and Privacy in Bochum and Professor Adam Aviv of the George Washington University in the United States collaborated on the study with Philipp Markert, Daniel Bailey, and Professor Markus Dürmuth from the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum. The findings will be presented at the IEEE Symposium on Security and Privacy in San Francisco in May 2020, according to the researchers. A preprint of the paper is available online.

What do users really need?

In the study, researchers had participants create either four- or six-digit PINs on Apple and Android smartphones and then analysed how simple it was to guess them afterwards. The attack model assumed an assailant who knew nothing about the victim and did not care whose phone was being unlocked. Under that assumption, the most effective method of attack is to start with the most likely PINs.

Some research participants could choose their PINs freely; the rest could only select PINs that were not on a blocklist. Whenever a participant entered a banned PIN, a warning appeared saying that this combination of digits was easy to guess.

IT security specialists utilised a variety of common passcode blocklists in the experiment, including the official list from Apple. The experiment involved a machine that tested all conceivable PIN combinations on an iPhone. The specialists also compiled their own lists which were tested too.

Is there any benefit in using a six-digit PIN over a four-digit PIN?

Six-digit PINs have been shown to be no more secure than four-digit ones. As Philipp Markert explains, "Mathematically speaking, of course, there is a tremendous difference": there are ten thousand possible four-digit PINs but one million possible six-digit ones. However, Markert also notes that consumers favour particular combinations, such as 123456 and 654321, which means the six-digit space is not used to anywhere near its full capacity. PIN security is something people don't seem to grasp instinctively, according to Markus Dürmuth.

Manufacturers restrict the number of PIN entry tries, so a well-chosen four-digit PIN is safe. After 10 unsuccessful attempts to enter the passcode, Apple permanently locks the device. On an Android phone, several codes cannot be input in rapid succession; as Philipp Markert points out, "in eleven hours, 100 number combinations may be examined."
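A quick back-of-the-envelope calculation shows why that throttling makes even a four-digit PIN workable, using the roughly 100-guesses-per-eleven-hours figure quoted above:

```python
four_digit_space = 10 ** 4   # 10,000 possible four-digit PINs
six_digit_space = 10 ** 6    # 1,000,000 possible six-digit PINs

# Android-style throttling: roughly 100 guesses per eleven hours.
guesses_per_hour = 100 / 11

# Worst-case time to sweep the entire four-digit space at that rate:
hours_full_sweep = four_digit_space / guesses_per_hour
print(round(hours_full_sweep))        # 1100 hours
print(round(hours_full_sweep / 24))   # about 46 days
```

In other words, even an exhaustive sweep of the smaller keyspace takes over a month under rate limiting, and Apple's hard lock after 10 attempts cuts an attacker off long before that.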

Do blocklists matter?

Researchers discovered 274 four-digit PINs that were on Apple’s blocklist. This list is used as a mechanism for improving PIN selection, as Apple iOS users are shown the warning "This PIN Can Be Easily Guessed" with a choice to "Use Anyway" or "Change PIN." It’s effectively a list of easily guessed PINs. Maximilian Golla says, "Since iPhone users only have 10 chances to guess the PIN, the blocklist does not make it any more secure." Using a blocklist for Android devices would make more sense, according to the researchers, because attackers may test out a wider range of PINs.

According to the study, the optimum blocklist for four-digit PINs should contain around 1,000 entries and varies somewhat from the list now utilised by Apple. Four-digit PINs like 1234, 0000, 2580 (the numbers show vertically below each other on the numeric keypad), 1111, and 5555 were found to be the most popular.

Now, iPhone users can choose to disregard the alert that they have entered a commonly used PIN, as we have seen. Because of this, the device does not reliably prevent entries on the blocklist from being chosen. The IT security professionals took a closer look at this element as part of their research: participants who received the warning could decide for themselves whether or not to pick a new PIN, while another group was forced to choose a PIN that was not on the list. On average, both groups' PINs turned out to be equally difficult to guess.

Pattern locks are less secure

Four and six-digit PINs were shown to be more secure than pattern locks, but not as safe as passwords.

The simpler the pattern is, the easier it is for lurkers to copy it, if they are watching over your shoulder. In fact, research found that lurkers were successful in recreating the swipe pattern 64.2% of the time after looking at it once. Of course, with multiple observations, that success rate rises.

According to the study, these are the most frequently used PINs:

  • Four-digit PINs of the following kinds: 1234, 0000, 2580, 1111, 5555, 5683, 0852, 2222, 1212
  • Six-digit PINs of the following kinds: 123456, 654321, 111111, 000000, 123123, 666666, 121212, 112233, 789456, 159753
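A blocklist check of the kind the study discusses is simple to sketch. The set below contains just the common PINs listed above, not the researchers' full recommended 1,000-entry list:

```python
# The most common PINs reported by the study, exactly as listed above.
COMMON_PINS = {
    "1234", "0000", "2580", "1111", "5555", "5683", "0852", "2222", "1212",
    "123456", "654321", "111111", "000000", "123123", "666666",
    "121212", "112233", "789456", "159753",
}

def pin_is_common(pin: str) -> bool:
    """Return True if the PIN appears on the study's most-common list."""
    return pin in COMMON_PINS

print(pin_is_common("2580"))  # True: it runs straight down the keypad
print(pin_is_common("7392"))  # False: not on the list
```

A phone applying a real blocklist would run exactly this kind of membership test at PIN-selection time and show a warning on a hit.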

So, don’t forget to double check that your PIN is not on the list above. If you’re interested in evaluating your password security, we strongly recommend checking them against the password checker.

This tool checks users’ passwords against a database of common weak passwords. It evaluates each password based on key factors such as:

  • Its number of characters. The password should have at least eight to 10 characters, but 16 to 20 characters is ideal.
  • Combinations. The password should include a combination of letters, numbers, and symbols rather than taking the form of a phrase. Each character has an associated numerical value, and these characters are summed to create a grand total.
  • Uniqueness. The password shouldn’t be repetitive in terms of its characters, with unique combinations used instead.
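The factors above can be sketched as a minimal checker. This is our own simplified illustration of the idea, not the actual tool's scoring algorithm:

```python
import string

def check_password(pw: str) -> dict:
    """Score a password on the factors described above. A simplified
    sketch of the idea, not the actual checker's algorithm."""
    classes = [
        any(c.islower() for c in pw),
        any(c.isupper() for c in pw),
        any(c.isdigit() for c in pw),
        any(c in string.punctuation for c in pw),
    ]
    return {
        "length_ok": len(pw) >= 8,             # bare minimum
        "ideal_length": 16 <= len(pw) <= 20,   # recommended range
        "mixed_classes": sum(classes) >= 3,    # letters, numbers, symbols
        "unique_enough": len(set(pw)) >= len(pw) // 2,  # not too repetitive
    }

print(check_password("aaaa1111"))        # fails the mix and uniqueness checks
print(check_password("Tr0ub4dor&3horse"))  # passes all four checks
```

A real checker would also consult a database of known weak passwords, as described above, rather than relying on structural rules alone.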

Is it safe to use a four- or six-digit PIN on a mobile phone?

Jul 21, 2022 — 4 min read

Backing up data is critical to ensuring system integrity, but if done incorrectly, it can exacerbate already-existing security issues. Fortunately, there are a number of best practices that can be followed.

In order to keep your data safe and secure, you need regular data backups. However, these backups are often themselves the source of security problems: a large number of breaches can be traced back to the mismanagement of data backups, and a lack of adequate backup controls is evident in the headlines and security surveys published every year. It pays, then, to follow best practices when developing an enterprise data backup strategy.

Millions of sensitive business records have been compromised in backup-related mistakes over the last few years, according to recent reports, and these are just the incidents that have been publicly disclosed. Confidential information, including intellectual property, is no less vulnerable to backup-related breaches than other types of sensitive data. Without a solid backup plan in place, security is the first casualty when things go wrong.

As long as there is a process for replicating sensitive data, many storage professionals believe that their organisation is safe. However, this is only half of the battle. A new set of dangers arises from what can be done with data backups, which are often overlooked. Because of this, it is essential to incorporate secure data backup guidelines into the overall enterprise information security strategy.

Here are 10 ways to keep your data backups safe and secure from threats like ransomware, malicious insiders, and external hackers, both locally and in the cloud:

Make sure you have a backup plan in place

It is important to make sure your security policies include backup systems. Access control, system monitoring, and malware protection all have a direct impact on data backups.

Incorporate backup systems into your disaster recovery plan

Your disaster recovery and incident response plans should include a backup of your computer files and other important information. A ransomware outbreak, an employee break-in, or an environmental event such as a flood or hurricane can all compromise or destroy a company's data backups. If you don't have a plan in place for what to do if and when the time comes, your backups could be harmed.

In order to protect data backups, restrict access to them

Only those who need to be involved in the backup process should be given access rights. This applies to the backup software and the backup files alike. Systems that provide backup access, whether on-premises or in the cloud, should not be overlooked.

Consider a variety of backup options

Keep your backups in a different location, such as a different building. Your data centre and your backups could be wiped out in one fell swoop if a natural disaster, a fire, or some other rare, but impactful, incident occurs.

Protect data backups from unauthorised access

Backing up to NAS, external hard drives, or tapes is fine as long as access to those locations can be tightly controlled. Your backup files deserve the same protection as your computer's hard drive. SOC audit reports, independent security assessments, or your own investigations can confirm that such controls are actually in place.

Ensure the safety of all backup media devices

Some backups are still kept on portable drives, tapes, and other media, despite the widespread use of hard disks and solid-state drives. Fireproof and media-rated safes should be used in these situations. One of the most common places to keep backups is in a “fireproof,” but paper-only safe. A standard fireproof safe only serves to provide a false sense of security for backup media such as tapes, optical disks, and magnetic drives, which have lower burning/melting points than paper.

Check out the security measures in place for your vendors

Find out what security measures your data centre, cloud, and courier service providers are using to keep backups safe. Despite the fact that lawyers appreciate well-drafted contracts, they are not always sufficient. As a fallback measure, contracts can help protect sensitive data, but they won't stop it from being exposed in the first place. Check to see if security measures are in place as part of vendor management initiatives.

Ensure the security of your network

Backups should be stored on a separate file system or cloud storage service that is located on a separate network. Ransomware-related risks can be minimized by using unique login credentials that are not part of the enterprise directory service. The use of two-factor authentication can increase the security of your backup system.

Encrypt backups as a top priority

Whenever possible, encrypt your backups. The same is true for backup media and files, which must be encrypted with strong passphrases or other centrally managed encryption technology if they are to be taken off the premises at any point in time. Encryption is an excellent final layer of defence when implemented and managed correctly. It's reassuring to know that even if your backups are lost or stolen, nobody else will be able to read them. This is especially useful when it comes to meeting compliance and notification requirements in the event of a data breach.
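As an illustration of the passphrase approach, here is how an encryption key is typically derived from a passphrase before it is handed to a cipher. This is a generic sketch with Python's standard library, not a prescription for any particular backup product; the actual encryption of the archive (e.g. with AES) is a separate step omitted here:

```python
import hashlib
import os

def derive_backup_key(passphrase: str, salt: bytes = b""):
    """Derive a 256-bit key from a passphrase with PBKDF2-HMAC-SHA256.
    The random salt must be stored alongside the encrypted backup so
    the same key can be re-derived at restore time."""
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000)
    return key, salt

key, salt = derive_backup_key("correct horse battery staple")
print(len(key) * 8)  # 256-bit key
```

The high iteration count deliberately slows down brute-force guessing of the passphrase, which is why a key-derivation function is preferred over hashing the passphrase once.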

Ensure that all of your data is backed up and tested frequently

These data backup flaws are likely to exist within your business. Before you're hit by a ransomware attack or data destruction, it's a good idea to find out where your vulnerabilities lie. Hire an unbiased third party to find the holes in your data backup processes and systems on a regular basis or look for them yourself. In the end, it's the little issues that aren't so obvious at first that can be the most difficult to deal with.

What you need to know about protecting your data backups

Jul 21, 2022 — 4 min read

As a kid, I was enthralled by science fiction films like 2001: A Space Odyssey, The Fifth Element, and Minority Report, hoping that the wonderful technology shown in these films — facial recognition, artificial intelligence, gesture controls, and flying cars — would one day become a reality. Today, we have access to most of these technologies, with the exception of flying automobiles, owing to the likes of Apple.

One of these emerging technologies is facial recognition. Face ID, Apple's replacement for Touch ID's fingerprint sensor, has been available to consumers since the release of the iPhone X. However, how secure is Face ID when we compare it to Touch ID, despite how nice, convenient, and futuristic it seems? What additional security or privacy concerns does it raise?

It's important to keep two things in mind while evaluating the security and effectiveness of various forms of authentication, including biometrics:

  • How easily an attacker can guess, duplicate, steal, or fake the authentication factor itself;
  • How well the digital copy of that factor is protected once it is stored.

And here’s why…

The inner workings of Face ID

Facial-recognition systems have always been weak authentication points because they were either simple to fool or highly sensitive to ambient conditions.

In addition to detecting movements in 2D video, Face ID uses a method called ‘structured light’ to map out 3D scenes. Taking this further, "TrueDepth" uses a structured IR light (30,000 dots) to create a 3D representation of your face by measuring the depth of various spots.

Now, this increases the identification accuracy and safety of Face ID dramatically. An old-fashioned photo or video will not mislead a 3D facial scanner, unlike in the past.

For Face ID to work, Apple advises you to look straight into the phone's camera, which means the system also checks for movement of the eyes and pupils. Some facial-recognition algorithms additionally use skin and texture cues to increase accuracy, but Face ID does not rely on these.

There is no such thing as impenetrable technology. Using publicly available photos and the technology of photogrammetry (specifically, stereophotogrammetry), researchers were able to generate quite realistic 3D representations of a person's face. We shouldn't be surprised if researchers and attackers uncover additional ways to fool Apple's Face ID mechanism in the future.

In spite of all the Face ID joke memes and the botched log-in at Apple's launch event, I feel that Face ID has been built quite effectively. Thanks to its underlying technology, this facial-recognition system appears to be more secure than many others. Even a 3D-printed face isn't an easy way in; mounting such an attack would take a tremendous amount of work.

A digital copy of your face

The digital form of an authentication factor is a second security risk for authentication systems. To put it another way, can an attacker obtain a digital replica of your login credentials and log in as you?

As far as the numbers are concerned, Apple appears to have done an excellent job of protecting this information on paper. In Apple's words, the model of your face is never saved outside of your iPhone X. No network or cloud storage is used for this data. On an iPhone, a "secure enclave" is where the Face ID data is saved, much like how your Touch ID fingerprints are stored.

Security and cryptography operations are handled by a distinct processor in Apple's newest SOC processors, the secure enclave processor (SEP). This processor is separate from the main processor and runs on its own operating system.

It is possible to store a digital key (such as a Face ID model) in your phone's SEP, but the main CPU does not view or manage it. Only the "outcomes" of the key's activities are received. Your face isn't shown to the operating system; it just receives a "matched" or "not matched" response from an encrypted area of your device. Simply put, Apple has created a method that makes it extremely difficult for attackers to get your Face ID data.
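That boundary can be sketched conceptually as an object that never exposes the stored template, only a verdict. This is a toy illustration of the idea, not Apple's actual SEP interface:

```python
import hashlib
import hmac

class SecureEnclave:
    """Conceptual sketch of the 'matched / not matched' boundary
    described above. The stored template never leaves this object;
    callers only ever receive a boolean outcome."""

    def __init__(self, face_template: bytes):
        # Private storage; nothing outside this class reads it.
        self.__template = hashlib.sha256(face_template).digest()

    def verify(self, candidate: bytes) -> bool:
        candidate_digest = hashlib.sha256(candidate).digest()
        # Constant-time comparison; only the verdict is exposed.
        return hmac.compare_digest(self.__template, candidate_digest)

enclave = SecureEnclave(b"scan-at-enrolment")
print(enclave.verify(b"scan-at-enrolment"))  # True
print(enclave.verify(b"someone-else"))       # False
```

A real enclave is a separate processor with its own operating system, but the interface property is the same: the operating system asks a question and gets back only "matched" or "not matched."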

Is it enough?

Researchers and hackers will eventually find a way to get around Face ID's security measures. When it comes down to it, no single form of verification can ever be completely secure. We can authenticate with something we know (passwords), something we have (tokens or certificates), or something we are (biometrics). The concern is that each of these factors may be stolen, guessed, or replicated in a variety of ways.

Biometrics, such as Touch ID and Face ID, have grown increasingly popular since they are considerably easier to use than passwords and provide a reasonable level of protection. A lengthy series of random characters and numbers is simply too difficult for the ordinary human to recall.

But we're falling into the same trap. All authentication methods have flaws, including biometrics. One day we will learn that biometrics like Face ID are no better than passwords.

That's why multifactor authentication is the only option that is genuinely safe. We need to combine two or more parameters instead of using them on their own. Someone could definitely make a convincing clone of your face with enough time and effort, but what if your phone or bank account demanded that you log in with both your face and your password? That would make it a million times more difficult to decipher.
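The combination can be sketched as a login check that succeeds only when both factors pass. The function and parameter names here are hypothetical, purely for illustration:

```python
import hashlib
import hmac
import os

def verify_login(password: str, face_matched: bool,
                 stored_hash: bytes, salt: bytes) -> bool:
    """Succeed only when BOTH factors pass: something you know
    (the password) and something you are (a biometric match)."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    password_ok = hmac.compare_digest(candidate, stored_hash)
    return password_ok and face_matched

# Enrolment: store only a salted hash of the password.
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)

print(verify_login("hunter2", True, stored, salt))   # True: both factors pass
print(verify_login("hunter2", False, stored, salt))  # False: face scan failed
```

Because the conjunction requires every factor, an attacker who clones your face still fails without the password, and vice versa.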

It's time to stop arguing over which single authentication method is more secure, Face ID or Touch ID, certificates or passwords. Face ID is a solid piece of technology, but it's vulnerable to hacking if you don't use it in conjunction with anything else.

How secure is Apple's Face ID?

Jul 8, 2022 — 3 min read

The storage of data is the single most significant factor to consider when it comes to the safety of mobile devices. It's true that malware and viruses can infect operating systems, forcing you to spend time and effort wiping and reinstalling everything after a security breach. Physical devices can also be stolen, leaving you with the cost and hassle of replacing them. However, the real worth of practically every company's digital cache is its data: personal details, trade secrets, confidential information, and private chats. The chance that this data will fall into the wrong hands far outweighs any other mobile security concern.

It can be tough to keep your data secure on all fronts, but solid-state drives (SSDs), with their intrinsic performance advantages, can make the work easier and more efficient by supporting encryption.

Why is encryption necessary to ensure the safety of data?

Encryption is the most important component of a secure storage system. Many businesses operate under the assumption that a device containing sensitive data would, at some point, be misplaced or stolen. The answer, then, is not to concentrate all of your energy on keeping track of physical devices or the components of their drives; rather, the thing that is most important is to preserve the real data that is stored on them. In fact, the cost of lost data or data that has been compromised might be significantly higher than the cost of a lost machine.

Encryption is the process of hiding information by putting it through a series of complicated mathematical operations. Undoing that process to recover the original data requires a coded value known as a ‘key’. Therefore, even if the storage device that contains the data is misplaced or stolen, the data will remain unreadable, at least without the key.
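The principle can be shown with a deliberately simple toy cipher. A repeating-XOR scheme like this is trivially breakable and nothing like the AES used by real drives, but it illustrates that the same key which scrambles the data is needed to unscramble it:

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR every byte with a repeating key. The same call both
    scrambles and unscrambles, because XOR is its own inverse."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

secret = b"account number 12345"
scrambled = xor_cipher(secret, b"key")

print(scrambled != secret)                       # True: unreadable without the key
print(xor_cipher(scrambled, b"key") == secret)   # True: the key reverses it
```

A production cipher replaces the XOR step with many rounds of far more complex operations, which is exactly where the processing cost discussed below comes from.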

There is unquestionably a great deal more to the mathematics behind encryption, such as the number of rounds the encryption scheme runs, the length of the key, and a variety of other considerations. The more complicated an encryption method is, the more processing power it takes to read and write data, which can make the computer increasingly sluggish. This is where solid-state drives (SSDs) shine thanks to their naturally faster performance: the calculations needed for encryption and decryption can take place significantly more quickly when data can be written to or read from the drive at a higher rate.

Encryption and solid-state drives: safety and speed

There are two approaches that can be taken to accomplish encryption. One method involves using software, in which case the mathematics required for decryption and encryption is handled by the primary processor of a computer. The second method includes what is known as "delegating" the encryption process to the drive's hardware so that the storage device may do the necessary mathematical operations on its own. The disk then provides the host CPU and memory with newly decrypted data in order to avoid imposing a "performance tax," also known as a lag, on the primary components of the system.

The server-class solid-state drives (SSDs) produced by Samsung are equipped with options for full disk encryption built right into the hardware. This makes protecting company data as easy as checking a box and entering a key. Typically, the drive stores a copy of the decryption key in a protected area within the drive controller circuitry itself, and then encrypts that key with another key that the user provides at boot time, such as a PIN or passcode. This lets the user unlock the drive with a single credential rather than remembering multiple passwords or PINs.

What happens if computers are taken during a break-in? Would anyone else have access to the information on them? With hardware encryption, the data would remain unreadable as long as the key, which is retained by the user and supplied at boot time, was not disclosed.

Maintaining safety without sacrificing speed

Doesn't encryption make things slower? The mathematics behind it does consume resources: the more data a drive reads and writes, the more cryptographic work is required. This effect is typically most evident on older spinning hard drives. The faster performance of solid-state media helps reduce this "performance penalty," so users can stay productive with the peace of mind that the sensitive data they are accessing is kept secure at all times.

Why encryption and SSD safety are so important

Jul 1, 2022 — 4 min read

Almost everything that can be connected to a network can also be hacked. But what about cars? Can they be hacked? And if so, how much time do criminals have to spend on it?

In fact, hackers are able to shut off your engine while you’re driving, control your steering or brakes, and even open and close your doors and boot. As a result, driving a hacked car can be pretty dangerous.

Finding a hole in your car's software is all it takes for someone to compromise the system. It isn't always that difficult for hackers to find a means to get into your car, even though it could take some time. A committed hacker can enter a reasonably sophisticated system with enough time. According to the research of Upstream — a car cybersecurity organization — by 2025, more than 86% of cars will be connected to the global network. ‘Connected’ refers to the sharing of data among servers, applications, phones, etc. Because of this connectivity, there are several ways that automobiles can be compromised.

What damage can hackers do if they hack your car?

There are multiple ways criminals can hack your car. First of all, the brake pedal and engine are vulnerable. Although your brake pedal is within your control, the onboard computer's microprocessors are what actually cause your brakes to function. Your brakes can be disabled and the engine can even be stopped by hackers who get access to your onboard computer.

Hackers could also interfere with the movement of the car via the wipers, heaters, air conditioning, or radio. Each of these can be controlled remotely and used to distract the driver. Windshield cleaning fluid is helpful, but it becomes a hazard when released repeatedly or abruptly, endangering your visibility; both your wipers and the washer system are hackable. The same goes for the heating and air-conditioning systems: they are useful until they can be used to harm you.

Another way of hacking can be performed by unscrupulous repair shops. The majority of initial diagnosis is done by onboard vehicle diagnostics equipment. However, dishonest businesses may trick your diagnostics system into suggesting that you need repairs that aren't actually necessary. This is an easy way for them to earn money. Thus, it’s important to use services that are reliable.

Hackers can also exploit a car's interconnected systems to compromise its safety and correct operation. Power locks, for example, frequently include functions such as automatic locking when the car is driven or reaches a given speed. Such integration makes cars susceptible to issues like the power locks being overridden to force an acceleration.

It’s also possible to extend the key fob range to gain physical access to the car. Modern wireless key fobs open automobile doors when the owner is nearby. Thieves who aren’t focused on harming the car owner, but rather looking to steal the car can also exploit the functionality of the key fob and increase its range using radio repeaters. It allows one to unlock the car from up to 30 feet away.

Moreover, if hackers break into your car's systems, they could obtain your private information, especially if the car is equipped with a GPS telematics system. This data could be misused to invade your privacy and possibly to learn where you live, work, or send your children to school. A particularly serious threat is the connection between your car and your smartphone. Some advanced hackers may be more interested in your connected phone than in the automobile itself: if they manage to access the car's system and locate the mobile device connected to it, your information is in danger, since the connected smartphone is a direct source of bank credentials, passwords, and other sensitive data.

Will your car be hacked?

Nowadays, almost every car is susceptible to being hacked. But as for the chances that you will actually be affected, it is unlikely you'll run into car hacking at this stage. In any case, it's better to be safe than sorry. Due to the lack of financial benefit, most hackers stay out of this sphere, with the exception of car thieves who use elements of hacking to neutralize the car's alarm and related security systems.

Car hackers frequently act for amusement or out of malicious intent, but very few hackers in the real world have targeted automobiles. Instead, the majority of vehicle hacks are either theoretical or carried out by research teams looking for weaknesses in a car's protection. Most car hacks are difficult for average hackers to execute, since they typically call for a great deal of knowledge, equipment, and sometimes even physical access to the vehicle itself. Even so, vehicle makers keep developing defenses to shield their products from cyber harm, precisely because of the potential for hacking attempts. As more and more vehicles become connected, smart, and autonomous, car hacks may well increase in the future.

How can you protect your car?

Currently, hackers aren't really interested in your car. However, the situation may change: hackers may become more interested in and adept at attacking cars as they realize the potential for stealing data, stealing vehicles, and even endangering their owners. There are some easy steps every car owner should take to protect their privacy and security.

First of all, do not program your home address into your GPS system. While having GPS may be handy, car thieves and hackers can use it to locate your home.

Then, it’s necessary to limit the wireless systems connected to your vehicle. These pose the greatest risk, as wireless and remote systems frequently operate online and are more susceptible to hackers than many other components.

And the last, but not least, piece of advice: use reputable repair shops, as anyone with physical access to your car and some computer skills can wreak havoc on it. Whenever you leave your automobile at a shop, whether for minutes, hours, or days, you run the risk of someone tampering with it to make it seem as though you need repairs that aren't actually required.

How easy is it to hack your car?

Jun 16, 2022 — 4 min read

Whenever the word ‘cybersecurity’ appears, the word ‘password’ springs to mind in parallel. People use passwords everywhere, from mobile phone locks to the protection of personal and state data stored on individual devices or websites. Everyone knows that a strong and secure password can protect our sensitive information; however, cybercriminals have invented a huge variety of methods to crack passwords and compromise us. Modern problems require modern solutions, and there are now many alternative ways to protect access to personal data. Plain passwords are being replaced by multi-factor authentication and more progressive technologies: fingerprint and face recognition, keychains, and password vaults. But what is the future of passwords? Will they become outdated, or remain a necessary part of access control?

Why are passwords considered weak?

With the growth of cybercrime, the requirements for passwords are increasing. The first passwords were short, easily memorized word or number combinations, but they were too easy to crack. Now, passwords are sophisticated alphanumeric combinations, sometimes too long to remember. Nevertheless, hackers can still find ways into your account. Passwords are usually based on common information such as a date of birth or the name of a child or pet, which means that hackers with enough time can guess them. Another reason passwords become targets is that they provide unrestricted access to an account. Moreover, many people use the same or similar passwords for many different accounts, which simplifies the process of collecting their sensitive data from multiple sources. Using the same password everywhere reduces the risk of forgetting it, but reusing a combination is quite risky. Users assume they won’t be hacked because their data isn’t valuable enough to steal, but this is a common mistake: almost anyone can be compromised or fall victim to a bot attack aimed at spreading spam or malicious links. So, the best way to protect your privacy is to avoid reusing passwords and to enable multi-factor authentication for your accounts.

The anti-password movement

This movement emerged as soon as people understood that ordinary passwords are more vulnerable than they should be. Passwords are inconvenient and provide multiple avenues for fraudsters to obtain your data and profit from it, most typically by selling it on the dark web for fast cash. Advanced attacks on logins have been known to shut down entire corporations or launch ransomware campaigns. Credential stuffing is the best-known form of password attack: it exploits password reuse, with attackers taking leaked login-password pairs and trying them against many other services, often aiming to take over as many corporate accounts as possible. Thus, internet users realized that passwords are not the most powerful protection available. So, what has been created to supplement, or replace, the password?

Multi-factor authentication

Single-factor authentication refers to requiring only one password to access an account. This method of protection was used for a long time, but it is now obsolete. The new practice is multi-factor authentication, which requires passing two or more layers of verification before accessing an account. The additional steps can include a PIN code, a server-generated one-time code sent to your email address or mobile phone, or even fingerprint and face recognition.

This makes access slightly more complicated, but it also serves as an additional barrier against compromise attempts and data thieves, motivating them to move on to easier targets. While it isn't infallible, it does deter attackers, potentially rescuing you from disaster.

Another successful method of protection is the passphrase, used in place of a common password combination. A passphrase is a meaningful or meaningless sequence of several words. A long phrase may seem hard to remember, but it is much easier than memorizing an alphanumeric combination full of substitutions, capitalization, and digits. Hackers find passphrases incredibly difficult to break, since they are several words long and can draw on an almost endless number of word combinations. Another advantage is that no special apps or systems are required: a passphrase can be used with any account that doesn’t impose tight password length limits.
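To make the comparison concrete, here is a rough, illustrative calculation (the 95-character pool and the 7,776-word Diceware-style list are assumptions for the example, not fixed rules): a five-word passphrase carries roughly as much entropy as a ten-character fully random password, while being far easier to remember.

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Bits of entropy for `length` independent random picks from a pool of `pool_size`."""
    return length * math.log2(pool_size)

# A 10-character random password drawn from ~95 printable ASCII characters:
password_bits = entropy_bits(95, 10)     # ~65.7 bits

# A 5-word passphrase drawn from a 7,776-word list (the Diceware convention):
passphrase_bits = entropy_bits(7776, 5)  # ~64.6 bits

print(f"password:   {password_bits:.1f} bits")
print(f"passphrase: {passphrase_bits:.1f} bits")
```

Note that this only holds for randomly chosen words; a passphrase built from a famous quote is far weaker than the arithmetic suggests.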

Is the password dead?

The first hacking attacks were conducted as early as the 1980s. Despite this, people still use passwords as the main line of defense for their private information. So, why can’t we replace them with more modern and convenient technologies?

First of all, it comes down to the ease of creating passwords. A password is generated by users themselves, so there’s no need for special services to provide protection on the user’s behalf. Another point is privacy: the password is one of the more private forms of authentication, as it doesn’t require any personal information and can be a random, meaningless combination, unlike biometric methods, which are tied to personal data that could leak into cyberspace. The last, but not least, point is the simplicity of replacing passwords. In the event of a major data breach, it’s far easier to change a password than the biometric features used for fingerprint or face recognition.


So what will the future of passwords look like? Passwords will definitely be used as one layer of multi-factor security for the next few years, as there is still no better option for protecting our privacy. People continue to search for the perfect method of protection, so perhaps in a few years something will finally appear and the world will be able to say goodbye to long, sophisticated passwords. Some services have already turned to new forms of access, like one-time codes or fingerprints, but the possibility of being hacked remains. For now, users still find a multi-layer system of protection more convenient than any of the alternatives.

The future of password security

Jun 15, 2022 — 3 min read

Migration to PHP 8

The new version of Passwork now runs on PHP 8. Previous versions of PHP are no longer supported.

New access rights window

The window with access settings for vaults and folders has been completely redesigned. All users and roles with access to a vault or folder are now collected here, along with links and sent passwords.

The rights can now be edited on each tab by selecting multiple objects at once. All modified and deleted objects are marked by an indicator until you save changes. Search filters allow you to display all objects with a certain access right.

Ability to quickly view who accessed vaults and folders

When hovering over the icon next to the name of a vault or folder, you can see brief information about the number of users, roles, links and sent passwords.

Clicking on a list opens up the window for access rights management inside a given vault or folder.

Granting access to individual passwords without adding users to a vault

In previous versions of Passwork, it was possible to send a password copy to users. In the new version, users will see the original password in the Inbox, which will be updated when the original vault changes.

That means you can now give access directly to a password without adding users to a vault or folder.

You can send a password and allow users to edit it; when a user changes the password, it is updated for you as well.

Ability to add TOTP keys and then generate 2FA codes

When adding and editing a password, you can add a TOTP field and enter a secret code to generate 2FA codes. The generated code is updated every 30 seconds.

The "Password" field is now optional, so you can keep 2FA codes separate from main passwords.

Adding TOTP keys and generating 2FA codes is available in the web version, browser extension, and mobile app.

Failed login attempts are now displayed in the action history

The action history displays all failed user authorization attempts. This allows you to better track unauthorized access attempts and the actions of blocked users.

You can see all failed login attempts on the Activity Log page by enabling a filter in the Action column.

Ability to enable priority authorization using SSO

The new version of Passwork now allows you to enable SSO priority authorization for all users. You can enable it in the "SSO settings" section.

With this option enabled, only the "Sign in via SSO" button is displayed on the authorization page; the login and password fields appear only when switching to standard authorization.

Optimized work with a large number of users

Passwork has been tested and optimized for 20,000+ users.

Improved LDAP integration

  • Test mode for LDAP roles and groups linking
  • Saving LDAP logs to a CSV file
  • Updating user attributes during synchronization with LDAP directory

Mobile app update

  • Passwork 5 support
  • Ability to copy passwords on long press
  • New home screen view with separating by type of vault
  • Inbox passwords
  • Improved search mechanism
  • Debug mode

If you are already using Passwork, update your version
How to upgrade from Passwork 4 to 5.

Or request a free demo.

Introducing Passwork 5.0

Jun 9, 2022 — 4 min read

Are you sure that your home is protected in the way that you think? Sure, you can secure it with modern locks or an alarm system to protect yourself from robbers who want to steal your money or furniture, but what about those who are looking at your home as a means of stealing your privacy?

As the number of smart electronic devices we use every day increases, we have to make sure that the personal information that is recorded by these devices is safe.

So let’s talk about home security and how to protect yourself from those that are looking for ways to hack your smart devices.

Which smart devices can be hacked?

Almost every smart system in modern devices is potentially vulnerable, as hackers know hundreds of ways to obtain remote access. Some devices seem too ordinary and primitive to be hacked, such as a robot vacuum cleaner or a smart baby monitor, while others are more sophisticated, like a smart TV or a smart home security system. They're all vulnerable, since they're connected to the internet and are frequently part of your home WiFi network. Recent research has shown that every one of them has several serious security flaws.

What are the risks?

Many experts note that when it comes to smart home devices, you should be thinking about ‘when’ they will be hacked, not 'if,' because many are notoriously easy to hack and provide no protection whatsoever. Researchers from the European consumer watchdog Euroconsumers examined 16 regularly used devices from a variety of manufacturers and discovered 54 vulnerabilities that exposed consumers to hacker attacks, with potential implications ranging from security system deactivation to personal data theft.

According to the research, hackers can gain access to highly sensitive information such as banking credentials, or even harness many linked devices to stage enormous distributed denial-of-service (DDoS) attacks capable of taking down banking and other service networks.

While most internet users realise the vulnerabilities associated with computers connected to the internet, many people still do not pay enough attention to the fact that their smart home devices present the same danger. As all home devices are commonly connected to the same WiFi network, hackers have the opportunity to access every domestic device at the same time.

Security gaps

One of the most significant dangers presented by smart home devices is the potential for a ‘deauthentication attack’, in which a hacker orders a device to disconnect from the home WiFi. This can lock up systems and devices, leaving them unable to respond to users’ requests. It was also discovered that some apps designed for home appliances transfer unencrypted data. This means that if hackers break into the system, they gain access to the owner's personal information, such as WiFi passwords, and can even listen in on the device's surroundings if it is equipped with a microphone. A stolen WiFi password may give hackers access to phones or computers on the same network and lead to an eventual data leak.

Due to gaps in their security design, smart devices often have flaws that make them vulnerable to attack. Designers of these devices focus on ease of use and the multifunctionality of their products, not on security. But now that almost everything from house alarms to refrigerators can be hacked, security has become paramount.

Recent research conducted in America and Europe has shown that about half of respondents use smart home devices, yet most of them do nothing to protect themselves from being compromised. Even though people know about the risks, they don't act to minimise them. One possible reason for this behaviour is the lack of accessible information about how to use smart home devices securely.

How can you secure your home devices?

Of course, the most basic way to protect yourself from the hacking of your smart home devices is just not to use them and replace them with less functional but safer options. But what if you can’t go without such a pleasure? Well, Euroconsumers — one of the most well-known private organisations for consumers — developed a list of recommendations that can help people who want to maintain their privacy while using smart devices:

1. Use an ethernet cable instead of Wi-Fi to connect your devices to the network where possible;

2. Create strong multilayered passwords for your devices and Wi-Fi;

3. After installing your WiFi network, always change the default name;

4. Always keep your devices up-to-date and switch them off if you’re not using them at a certain moment;

5. When you use a device for the first time, always finish the setup procedure;

6. Do not buy cheap devices with a low level of protection.


When we’re talking about smart devices, we’re not just talking about full smart home systems such as alarms, but also smart appliances such as TVs, doorbell systems, vacuum cleaners, and other common household things. Using them makes our lives more comfortable and saves time and energy. However, each has its own flaws, and many are vulnerable to hacking. Consumers should therefore take this into account and consider all possible ways to protect their privacy without giving up such useful appliances. If you use one of these devices, try to find out how much attention its manufacturer pays to security. Moreover, make sure to protect your own devices from hacking: it won’t take much time or effort, but it will safeguard your sensitive data and keep you from being compromised.

How secure are smart home devices?

Jun 8, 2022 — 5 min read

If you still haven’t heard about Starlink, you’ve definitely heard about its creator — Elon Musk.

Elon Musk is a billionaire entrepreneur most famous for his electric vehicle firm, Tesla, and his space exploration company, SpaceX. Maybe you learned about him from news headlines about his attempts to acquire Twitter, or from his habit of stirring up trouble on social media. Perhaps you only know him as one of the world's wealthiest people. Starlink is a lesser-known facet of Elon Musk’s career, focused on providing internet access to every part of the world, including hard-to-reach places, and that’s what we’re going to talk about today.

Starlink is the name of a global and constantly growing network of orbital satellites based on SpaceX technologies. The project began in 2015, and the first prototype satellites were sent into orbit in 2018. In January 2021, after three years of development and successful launches, Starlink reached 1,000 satellites, and over the course of the next year this number doubled. Now, Starlink has more than 2,000 operational satellites orbiting the Earth. Still, this is just the beginning: the plan will be complete once the network covers most of the Earth’s surface, which will require about 12,000 satellites in orbit.

Currently, the project provides service in 32 countries, a number set to increase every year. However, the budding broadband provider still has a backlog of prospective customers waiting to receive equipment and connect to the system.

Starlink offers high-speed broadband internet whose speeds, according to the speed-tracking website Ookla, exceed 100 Mbps in more than 15 different regions. In the United States, Starlink offers average download speeds of around 105 Mbps and upload speeds of around 120 Mbps, about five or six times faster than its satellite rivals. Elon Musk aims to double the average speed and reach 300 Mbps. In any case, even now the Starlink system really is one of the fastest satellite internet services in the world.

How much does it cost?

The initial cost of the service was $99 per month, with a one-time payment of $499 for the satellite dish and router. As Starlink is focused on making the internet widely available, it was announced that the cost of the service would decline within a few years. In March 2022, however, the company announced a price increase: the monthly payment is now $110, and the initial payment for the equipment is $599. This price is quite high for satellite internet, but Starlink’s creators are betting on the network's wide coverage and its availability in hard-to-reach places.

As the president of SpaceX said last year, Starlink aimed to keep pricing as straightforward and transparent as possible, with no plans to add more service tiers. However, in 2022, a new premium tier, with a scanning array twice as large as the standard plan's and download speeds ranging from 150 to 500 Mbps, appears to be modifying that strategy. This option costs $500 per month, with an initial equipment payment of $2,500. The company is now taking orders for that tier, with service set to arrive later in 2022.

Starlink, like any other modern technology, has some benefits and drawbacks. Let's take a quick look at them.

The pros of Starlink:

1. Faster Internet. The internet offered by SpaceX is definitely faster than traditional satellite Internet. Starlink is so quick that it's almost impossible to compare it to traditional satellite connections.

2. Relatively cheap. Starlink's internet service is reasonably priced. In rural and suburban locations, it is less expensive than cable and satellite internet. Suburban consumers pay the same price as city residents in many areas, but they get much slower internet.

3. Wide availability. Regardless of your location, Starlink is available to every customer. It has wide network coverage and provides fast, unlimited Internet from Antarctica to the middle of the ocean.

4. Faster disaster recovery. Storms, tornadoes, wildfires, and floods can seriously damage internet cables, and after any such disaster, restoring cable internet takes quite a lot of time. The repair process is both costly and time-consuming. Starlink, by contrast, is available straight away after a disaster.

The cons of Starlink:

1. Hardware installation. For many users, hardware installation could become a problem, as Starlink does not install the equipment needed to use its network. Customers have to install the equipment themselves or spend extra money hiring professionals.

2. It’s not portable. Compared to cellular internet, Starlink is not portable: we can use our phones to access the internet from anywhere, but the Starlink dish cannot be carried around. Though it can be installed on an RV or a boat, it is unfortunately not small enough to transport easily.

3. Service disruptions depend on the weather. It's common for satellite service to be disrupted by rain, storms, or solar flares. To be fair, cable internet is also subject to weather-related disruption, so this drawback is not unique to Starlink.

As the number of Starlink’s users increases, the question of the security of this Internet connection has become acute. People want to make sure that the provider that they use is safe enough and that nothing threatens their personal data.

The main problem with satellite internet is that some of the information carried by satellites can be intercepted as it travels to and from the Earth, and some of that data can even be altered before it reaches its intended destination. This does not necessitate expensive specialized equipment: according to a recent study, it could be accomplished with about $300 worth of hardware. It's vital to keep in mind that this issue does not affect all traffic: if you're using an encrypted connection, this form of attack is likely to be unsuccessful. It does, however, underline the reality that as satellite internet becomes more ubiquitous, cybercriminals will have additional opportunities.


Starlink is a quickly growing and widely available technology that is just at the start of its development, yet it has already demonstrated great advantages over cable networks. Like any modern technology, it has several disadvantages, such as weather dependence and the risks inherent to satellite networks. SpaceX promises a very high level of service with wide coverage, but as practice shows, not all of its promises are worth trusting. If you’re planning to get a Starlink dish, consider these issues carefully to make sure you’re making the right choice.

How secure is Elon Musk’s Starlink?

May 27, 2022 — 4 min read

From smartphones to automobiles, almost every device is equipped with Bluetooth technology nowadays. Many people use it every day while connecting to headphones, sending files, or making remote calls in their cars. However, most people are unaware that using Bluetooth carries a number of risks when it comes to your privacy and safety.

What is Bluetooth?

Bluetooth technology is a standard for creating a local network that allows nearby devices to exchange data wirelessly. In other words, you can use Bluetooth to transfer data between devices such as your phone and headphones without a cable. Bluetooth is widespread and free to use, which is why it is so popular with device makers and consumers.

Bluetooth was invented in 1994 by Ericsson, the telecommunications equipment manufacturer. Now, you can find this technology in almost every electronic device around the world. Even smart household appliances are equipped with Bluetooth nowadays, so you can send instructions to your refrigerator or vacuum cleaner remotely.

Bluetooth hacking

Of course, as with most standards, Bluetooth has its disadvantages and security vulnerabilities. Bluetooth lets devices communicate across short distances and for a limited time, so most Bluetooth hackers focus on getting close to a target and carrying out the attack quickly, particularly in areas where people tend to linger: cafes, the underground during rush hour, and buses all pose a heightened risk to your devices.

However, when the target moves out of range, the attack may be cut short, ruining the hacker’s plans. It's worth noting, though, that some attacks can be launched from hundreds of meters away, so moving a few steps isn't the same as being out of range.

Some hackers can even take control of your device in under 10 seconds via Bluetooth. Even more concerning is the fact that they can accomplish this without any interaction from the user.

There are a variety of Bluetooth hacking techniques:

1. Bluejacking

This type of attack involves sending spam messages via Bluetooth: one Bluetooth-enabled device hijacks another and sends it spam. At first, such spam is merely annoying, but if you click a message and accept files from an unknown device, you may get into big trouble. The message may contain a link to a website designed to steal your personal information and compromise you.

2. Bluesnarfing

This type of attack is similar to the previous one but much more detrimental to your privacy. During such hijacking attempts, hackers not only send spam messages but also collect private information from the victim’s device, such as chat messages, photos, documents, or even credentials. All of this can then be used to compromise you or for extortion attempts.

3. Bluebugging

This is the last and most dangerous type of Bluetooth hijacking. Hackers use your device to establish a covert Bluetooth connection, which is then used to acquire backdoor access. Once inside, they can monitor your activities, harvest your personal information, and even impersonate you in your device's apps, including those used for online banking. This type of attack is known as bluebugging because it resembles bugging a phone: once hackers gain complete control over the phone, they can make calls themselves and listen in on every conversation.

Bluetooth security concerns

If you think that direct intervention by hackers is the only danger Bluetooth presents, we have some bad news for you. Many apps, including popular ones from Google or Facebook, can monitor users' locations through Bluetooth technology.

By switching on Bluetooth, you enable the transmission of information, but you also enable your device to pick up adjacent Bluetooth signals, which app developers use to pinpoint your location. Thus, the IT companies behind these apps can learn where you are wherever you go and keep track of your everyday activities. The most worrying aspect is that Bluetooth enables extremely precise tracking. The good news is that most app creators state in their privacy statements that their apps use Bluetooth. Unfortunately, the majority of consumers do not read the privacy statements of the apps they use, and automatically accept all of a new app's requirements and rules.

To protect yourself from activity and location tracking, read each app’s privacy policy and avoid apps that demand Bluetooth unnecessarily. If you find that some of the apps you regularly use require Bluetooth, you can disable location tracking for them.

What do we need to do to safeguard our Bluetooth connections?

Having covered the risks associated with Bluetooth, we should give you some advice on safeguarding your devices.

1. Make your Bluetooth device non-discoverable. This can be done in your device’s settings.

2. Do not send any sensitive information via Bluetooth as it can be caught by intruders.

3. Do not accept any files or messages from unknown devices via Bluetooth, especially in crowded places.

4. Always turn your Bluetooth off after using it to prevent unwanted connections and breaches.

5. Don’t share anything via Bluetooth in crowded places, even if you want to connect to your friend’s device.

6. Install some security patches to protect your device and stop any possible tracking via Bluetooth.


Bluetooth is a common and useful technology that is used in almost every device due to its convenience and fast connection. But the simplicity of its technology leads to several flaws, which is why Bluetooth can’t be named a very secure standard. Nevertheless, most people cannot avoid using this technology — it’s just too widespread. To keep your device safe, we recommend following the aforementioned security rules.

How secure is Bluetooth? A complete guide on Bluetooth safety

May 19, 2022 — 4 min read

What is WebSocket?

The WebSocket API is a technology that opens a bidirectional, interactive communication session between a user's browser and a server. You can use this API to send messages to a server and receive event-driven responses instead of polling the service. WebSocket is a stateful protocol, which means the connection between the client and server remains open until one of the parties terminates it.

Consider client-server communication: when the client initiates the connection with a server, a handshake occurs, and every subsequent message travels over that same connection until either party closes it.
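That handshake is an ordinary HTTP Upgrade request: the server proves it understood it by hashing the client's `Sec-WebSocket-Key` header together with a fixed GUID defined in RFC 6455. A minimal sketch of the server-side computation:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket opening handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def accept_key(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept header value the server must return."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# Example key taken from RFC 6455 itself:
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))  # prints "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="
```

If the `Sec-WebSocket-Accept` value in the server's `101 Switching Protocols` response matches, both sides switch from HTTP to the WebSocket framing protocol on the same TCP connection.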

WebSocket is a good fit for services that require constant data transmission, such as network games, online trading sites, and other continuously updating websites.

Where is WebSocket used?

1. Real-time web applications. Such services use WebSocket to stream data to the client continuously. This type of connection is preferred over HTTP because continuous data transmission goes through a connection that is already open, which makes the process much faster. A good example of a real-time web application is a Bitcoin trading page that continually pushes the constantly changing price to the client;

2. Gaming applications. In such applications, data must be constantly transmitted from the server to the client’s computer. Otherwise, the collective acts between multiple users of the application will be unavailable;

3. Chat applications. Chat applications use WebSockets to create a connection just once and then exchange messages, video, and audio between the interlocutors over it.

The Vulnerabilities of WebSocket

WebSocket technology generates a lot of excitement, and at the same time disagreement, among web developers. Despite all the benefits it provides, it still carries some risks, as the technology is relatively new. Due to the complexity of WebSocket programming, it's hard to provide comprehensive security for applications that use it. The constant transfer of data without closing the connection after every request opens up an opportunity for hackers looking to acquire access to the client's data.

In early versions of the WebSocket protocol, there was a vulnerability known as 'cache poisoning'. It allowed attacks on caching proxy servers, transparent ones in particular. The attack proceeds in the following manner:

1. The attacker lures the victim to a specially crafted webpage;

2. This webpage opens a WebSocket connection to the attacker's server;

3. The page then sends a crafted WebSocket request that certain proxy servers cannot parse. The request passes through such a server, which then believes that the next frames it sees are a new HTTP request, when in fact the WebSocket connection is still transmitting data. Since the attacker controls both ends of the connection, they can push malicious data through it, and the deceived proxy server receives and caches that data;

4. From then on, every user who goes through the same proxy server receives the attacker's code instead of, say, the real jQuery library.

The risk of such an attack had remained theoretical for a long time, until an analysis of WebSocket’s vulnerability showed that it really can happen.

Because of that vulnerability, WebSocket's designers introduced 'data masking' to protect both parties of the connection from such attacks. A side effect of masking is that it also prevents security tools from detecting patterns in the traffic.
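
Conceptually, masking is just a per-frame XOR with a random 4-byte key, as RFC 6455 requires for every client-to-server frame. A minimal sketch (the payload string is illustrative):

```python
import os

def mask_payload(payload: bytes, masking_key: bytes) -> bytes:
    """XOR the payload with a 4-byte masking key, as every
    client-to-server WebSocket frame must (RFC 6455, section 5.3)."""
    return bytes(b ^ masking_key[i % 4] for i, b in enumerate(payload))

key = os.urandom(4)  # a fresh, unpredictable key for each frame
masked = mask_payload(b"GET /malicious HTTP/1.1", key)

# Unmasking is the same XOR, so the server recovers the data, but an
# intermediary cache never sees a predictable byte sequence it could
# be tricked into treating as an HTTP request.
assert mask_payload(masked, key) == b"GET /malicious HTTP/1.1"
```

Because the key changes per frame, an attacker cannot craft payload bytes that will appear on the wire verbatim, which is exactly what the cache-poisoning attack relied on.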

WebSocket traffic isn't even recognised by tools such as DLP (Data Loss Prevention) software, which therefore cannot analyse the data it carries. That also makes it impossible for such tools to detect problems like malicious JavaScript and data leaks, and it leaves a WebSocket connection more exposed than plain HTTPS.

Another disadvantage of the WebSocket protocol is that it doesn't handle authentication. This must be handled separately by an application-level protocol, especially when sensitive information is being transferred.

The next type of cyberattack that WebSocket can be exposed to is tunnelling. Anyone can use WebSockets to tunnel an arbitrary TCP service; tunnelling a database connection right through to the browser is one example. If an attacker then manages to carry out a Cross-Site Scripting attack, what would otherwise be a script injection evolves into a comprehensive security breach.

Also, it's necessary to know that data transfer over the plain WebSocket protocol (ws://) is done in clear text, just like HTTP. As a result, man-in-the-middle attacks on this data are a real threat, so it's better to use the WebSocket Secure (wss://) protocol to avoid data leaks.

How can we improve WebSocket security?

After looking through the main vulnerabilities of WebSocket, it’s necessary to take a look at the ways and tools that are able to protect your WebSocket connection.

First of all, a good piece of advice is to use the wss:// protocol instead of ws://. It's much safer and prevents a huge number of attacks from the outset.

It's also necessary to validate the data travelling over a WebSocket connection on both sides: data returned by the server can be problematic, and messages received from clients should always be treated as data, never trusted. It's not a good idea to assign these messages to the DOM or evaluate them as code.

Another way to protect your connection is a ticket-based authentication system. WebSocket servers are often separate from the HTTP servers that handle login, which makes standard HTTP header- and cookie-based authorisation awkward; ticket-based authentication is a solution to this problem.
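
A minimal sketch of such a ticket scheme using an HMAC. The shared `SECRET`, the `user:expiry:signature` layout, and the 60-second lifetime are assumptions of this example, not part of any standard:

```python
import hashlib
import hmac
import time

# Assumed shared secret: in practice this is held by both the HTTP
# server that issues tickets and the WebSocket server that checks them.
SECRET = b"replace-with-a-real-shared-secret"

def issue_ticket(user_id: str, ttl: int = 60) -> str:
    """Issued by the HTTP server after a normal (cookie/session) login."""
    expires = str(int(time.time()) + ttl)
    sig = hmac.new(SECRET, f"{user_id}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{expires}:{sig}"

def verify_ticket(ticket: str) -> bool:
    """Checked by the WebSocket server before upgrading the connection."""
    try:
        user_id, expires, sig = ticket.split(":")
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{user_id}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expires) > time.time()

assert verify_ticket(issue_ticket("alice"))
assert not verify_ticket("alice:9999999999:forged-signature")
```

The short lifetime matters: a ticket is meant to be redeemed once, immediately after login, so a leaked one is useless within a minute.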

So, how secure is WebSocket?

To sum up, WebSocket doesn't have a perfect security story, as is the case with any relatively new technology, largely due to the complexity of building and maintaining it. WebSocket has enough weaknesses, such as the lack of built-in authentication and its susceptibility to the attacks described above, to let attackers transmit malicious code, so one should always be wary of this fact.

However, WebSocket is a progressive technology that is great to use in some spheres like gaming or trading. That’s why it should be improved to make its usage secure for every connected client or server.

How secure is WebSocket?

May 12, 2022 — 4 min read

If you’ve ever set up a wireless router on your own, you’ve probably heard of WPS. You might come across this term in the router’s configuration menus or see it on the backside of your router — but do you know what WPS actually means and how it works? If you can’t answer these questions yourself, then you’re in the right place.

What is WPS?

WPS stands for Wi-Fi Protected Setup. It's a wireless network security standard that speeds up and simplifies the process of connecting your device to a router, letting you do it quickly without entering the Wi-Fi password. To enable WPS, press the physical button located on the back of your router or switch it on in the router's configuration menu. When you turn it on, WPS mode allows you to connect your various devices to the router using the WPS password, also known as the WPA-PSK key.

In fact, WPS is not responsible for the Wi-Fi connection at all. It’s designed solely to send the connection data between the router and the wireless device. Remember, that’s an important distinction.

WPS was an idea of the nonprofit Wi-Fi Alliance, an association of the largest companies that make computers and Wi-Fi devices. More than 600 members take part, including companies such as Microsoft, Samsung, and Intel. The alliance was founded in 1999 to promote Wi-Fi technologies and certify Wi-Fi products around the world. The WPS standard itself was created in 2007 to simplify the connection process, and since then most Wi-Fi systems around the world have adopted it.

How does WPS work?

If you want to connect your wireless device, you have to know the password to the Wi-Fi network. This process isn’t difficult but it takes some time to get the essential data. WPS makes it easier and a bit quicker.

There are some different ways to do it. First of all, WPS can be a workaround for connecting to Wi-Fi without a password. To do so, you should hit the WPS button on your router to enable device detection. Then, take your device and choose the network you need to connect to. The connection will be immediately available and the system won’t ask you to enter the password.

Some wireless electronic equipment like printers also has a WPS button that can be used to make rapid connections. All you have to do is to push both buttons, on the device and on the router, to get access to the wireless network. You don’t need to enter any data here, as the WPS delivers the password automatically. Also, that device will be able to connect to the same Wi-Fi router without pushing WPS buttons in the future as the password will be remembered.

The other option uses an eight-digit PIN code. When WPS is enabled on a router, a PIN is generated automatically; you can find it on the router's WPS setup page. Devices that lack a WPS button will ask for this PIN: by entering it, they verify themselves to the network and connect to it.

The last option also uses an eight-digit PIN. Some devices support WPS but have no WPS button; they generate a client PIN of their own, which the router uses to admit the device to the network. You just enter that PIN in your router's settings to grant access.

Unfortunately, methods that require using a PIN code don’t have any benefits in the speed of the connection process. You spend the same amount of time entering the router’s password and the WPS PIN, so you should just choose the way that’s more comfortable for you.

Which devices work with WPS?

WPS is supported by a wide range of devices, most commonly, wireless routers. However, you can also find a WPS button on wireless printers, Wi-Fi Range Extenders and Repeaters, which commonly provide WPS capabilities as well. Finally, the WPS functionality is available on a few higher-end laptops, tablets, smartphones, and 2-in-1 devices, where it’s usually implemented via software rather than physical buttons.

What are the advantages and disadvantages of WPS?  

Despite the fact that WPS is embedded in most Wi-Fi equipment, the benefit of this standard is still a controversial issue. Some professionals opt for using it as it makes connecting to the router easier and quicker, while others opt against it because WPS undermines the security of the connection process.


1. It's quick, especially if both the router and the client device have the WPS button.

2. It's simple and requires no technical knowledge. There is no more primitive way of connecting Wi-Fi than pressing the WPS button on both the router and the client device.

3. Support is relatively strong. WPS is supported by all routers and most networking devices. WPS can also be used to establish rapid Wi-Fi network connections on the most common operating systems like Windows, Android, and Linux.


1. It isn't really safe. WPS connections using PINs appear to be particularly sensitive to brute-force attacks. A successful WPS attack allows an attacker to obtain access to your Wi-Fi network, and disabling WPS is the only viable remedy.

2. WPS can be used by anyone who has physical access to the router. So any person who is aware of the router’s location can connect it without your permission.

3. WPS is not supported by Apple. You can't connect to Wi-Fi using WPS if you have a Mac, an iPhone, or an iPad. This is because Apple has determined that WPS is insufficiently secure, and thus WPS isn't supported by any of its devices.
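
The brute-force weakness in point 1 comes from how the PIN is validated: the router confirms the two four-digit halves separately, and the eighth digit is merely a checksum of the first seven. A sketch of the arithmetic, using the checksum algorithm from the WPS specification:

```python
def wps_checksum(pin7: int) -> int:
    """Checksum digit appended to the first seven digits of a WPS PIN
    (alternating weights 3 and 1 from the least significant digit)."""
    accum = 0
    while pin7:
        accum += 3 * (pin7 % 10)
        pin7 //= 10
        accum += pin7 % 10
        pin7 //= 10
    return (10 - accum % 10) % 10

# Because the router acknowledges each half of the PIN on its own, an
# attacker needs at most 10**4 tries for the first half plus 10**3 for
# the second (the 8th digit is derived), not 10**8 for the whole PIN.
worst_case = 10**4 + 10**3
print(worst_case)             # 11000
print(wps_checksum(1234567))  # 0, so 12345670 is a valid-format PIN
```

Eleven thousand attempts is hours of work for an automated tool, which is why disabling PIN-based WPS entirely is the standard advice.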


As we’ve found out, the WPS network’s security standard has both benefits and limitations. On the one hand, it helps us to avoid remembering the Wi-Fi password and connect quickly. On the other hand, WPS is not secure enough to foster user confidence across the board. So, it’s up to you to decide on using WPS or not. In any case, you can disable the function at any time you want by simply switching off the WPS button.

WPS – What is it, and how does it work?

May 5, 2022 — 4 min read

Upon entering your account on a website or in an app, you might be asked to enter a word or a number combination from a strange-looking picture. The symbols are usually distorted, and it can take a few seconds to make them out. This security step is called CAPTCHA, and it can seem pointless and tedious, especially if you struggle to recognise and enter the right combination. But in truth, this simple test plays an important part in the security system, as it keeps bots and automated programs out of websites and online purchases.


CAPTCHA is an abbreviation of Completely Automated Public Turing test to tell Computers and Humans Apart. It's a type of test that helps website creators keep bots from registering accounts or making purchases. CAPTCHAs are also referred to as "Human Interaction Proof" (HIP), and they're widely used across the internet and mobile apps alike. The most common type of CAPTCHA is a picture containing a distorted letter combination that you must decipher and type into the answer box; if you enter the right symbols, the system gives you access to the site or to the next step. You'll also see other varieties of CAPTCHA on different websites, some of which ask you to look at a set of pictures and pick out those containing a target object such as bicycles or traffic lights.

How does CAPTCHA work?

CAPTCHA came about mainly because of certain individuals’ attempts to trick the system by exploiting flaws in the computers that power the site. While these individuals are likely a small percentage of total Internet users, their activities have the potential to harm a huge amount of websites and their users. A free email provider, for example, might be inundated with account requests from automated software. That automated application could be part of a wider scheme to spam millions of people with junk mail. The CAPTCHA test is used to determine which users are genuine people and which are computer programs.

The internet and its computers are built on precise, formal languages. Because of the unique and complex rules that human languages follow, as well as the slang that humans use, computers have a hard time understanding them.

Most CAPTCHAs involve visual tests that a computer program struggles to figure out: machine pattern recognition in distorted images is far less sophisticated than human vision. While a human spends a few seconds on a CAPTCHA, a program must spend much longer searching for a consistent pattern.

There's also an alternative to the visual CAPTCHA: one based on audio. That type was developed so that people with visual impairments can pass a CAPTCHA. Approximately 75% of all adults require some kind of visual correction, so it's quite likely that some users can't make out the letters on screen; after all, they are usually hard to read even with good eyesight. An audio CAPTCHA is typically a succession of spoken characters and numerals, frequently accompanied by background noise and sound distortion to protect against bots.

The third type of CAPTCHA is a contextual one. The task for the user is to interpret some text with his or her own words, keeping the main idea of the passage. While computer algorithms can recognize significant terms in literature, they aren't very adept at deciphering the meaning of those words.

It's also important to serve CAPTCHA images to every user in a random order. If the images repeated constantly or appeared in a fixed order, it would be easy for spammers to record the sequence and program a system to pass the test automatically based on that order.
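
That unpredictability is easy to achieve with a cryptographically secure random generator rather than a plain pseudo-random one. A minimal sketch; the alphabet and length here are arbitrary choices for illustration:

```python
import secrets
import string

def random_challenge(length: int = 6) -> str:
    """Draw an unpredictable challenge string with a CSPRNG, so no two
    users can anticipate the same sequence of CAPTCHAs."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_challenge())  # e.g. 'K7QX2M', different on every call
```

Using `secrets` instead of `random` matters here: the ordinary `random` module is predictable if an attacker can observe enough outputs, which is exactly the tracing attack described above.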

Turing test

CAPTCHA was based on the Turing Test. Alan Turing, an ingenious mathematician, who was named “the Father of modern computing”, suggested this test to find out whether the computer is able to think like a human or not. The point of the test is that there are a number of questions that must be answered by two participants. One of them is a real person while the other is a computer. There’s also an interrogator whose task is to find out which answers were given by the machine and which ones were given by the human. If the interrogator isn’t able to understand who is who, the test has been passed.

The main goal of CAPTCHA’s creators was to create a test that could be easily passed by a human, but not by a machine.

The pictures we have to decipher to pass a CAPTCHA strike a deliberate balance: distorted enough that bots mostly cannot read text presented as an image, yet still legible enough for any human user to enter.

Who uses CAPTCHA?

CAPTCHA is a type of verification tool that is widely used by websites and apps to ensure that a user is not a robot.

It is usually used to protect online polls from bot votes employed by scammers to cheat. Another purpose of CAPTCHA is to restrict access to websites where consumers can create free accounts, such as Gmail; thanks to CAPTCHAs, spammers can't use bots to establish a slew of spam email accounts.

CAPTCHA is also used by ticket services to prevent profiteers from buying too many tickets for big events. This helps honest people to buy tickets in a fair manner while preventing scalpers from putting in hundreds of orders.

Finally, CAPTCHA is used to prevent spamming messages or comments on websites where it’s possible to contact the page’s user directly. It helps to stop bots from automatically sending spam and spoiling the ratings of products or services.

To sum up, CAPTCHA is a good tool for preventing spam bots and automatically controlled web pages that spread viruses. It helps the creators of apps and websites verify that a user is a real person, not a program written to abuse the system. This small but necessary identification stage is genuinely helpful and recommended for use on any website where users can create free accounts. If you want to use CAPTCHA to protect your own website, be aware of the numerous ways an implementation can fail; we recommend using a service like Google's reCAPTCHA to generate one for you. It's also a good idea to run an antivirus program alongside CAPTCHA to keep your device secure.

CAPTCHA — How does it actually work?

Apr 28, 2022 — 5 min read

When we look at the statistics, the number of cybercrimes increases year on year. Hackers have invented a wide range of tools that can crack your password or get your access information with ease. But there are also other ways of violating your privacy. Every click you make is tracked by websites, advertising agencies, ISPs, and other third parties. Thus, you need to secure your privacy online using a web browser optimised for making your web-surfing secure. So, which browsers are really designed to preserve your personal data and prevent leaks?

Everyone knows that Microsoft Edge and Safari come built into laptops and smartphones running the corresponding operating systems. So most users are unconcerned about the browser they use, assuming the default option is the best one. Although browsers like Safari, Google Chrome, and Opera are the most common, they can't really claim to be the most secure and privacy-conscious. Indeed, there are less common but highly capable privacy-focused browsers with plenty of built-in privacy settings that block cookies, ads, and data tracking. It's difficult to name the best one, as each has its own privacy features that make it a contender. So let's go through these browsers and their privacy customisations to help you make the call.

1. Tor Browser

Tor is one of the best-known privacy-oriented browsers. It is based on Firefox and routes traffic through its own network of relay servers focused on anti-surveillance, and it automatically erases your cookies and browsing history on the fly. Tor anonymises users' internet access by encrypting their traffic in at least three separate layers, passed through relays (nodes) that are decentralised and run by volunteers. Each node peels off a single layer of encryption, so no single node ever sees the entire message. Thus, no one can trace your online activity or identify you unless you deliberately identify yourself. Tor also works to reduce the uniqueness of your browser fingerprint, a feature unmatched in decreasing the chance that a user can be identified.
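
The layered routing can be sketched in a few lines. The XOR "cipher" and the key derivation below are toy stand-ins for the real circuit cryptography Tor negotiates with each relay; they serve only to show how every node peels exactly one layer:

```python
import hashlib

def layer_key(node_name: str) -> bytes:
    # Toy stand-in for the symmetric key negotiated with each relay.
    return hashlib.sha256(node_name.encode()).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR here is a placeholder for a real stream cipher: applying it
    # twice with the same key returns the original bytes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"GET /page"
route = ["guard", "middle", "exit"]

# The client wraps the message in one layer per relay, innermost first.
onion = message
for node in reversed(route):
    onion = xor_crypt(onion, layer_key(node))

# Each relay removes exactly one layer; only after the exit node's
# layer comes off does the plaintext exist anywhere on the path.
for node in route:
    onion = xor_crypt(onion, layer_key(node))
assert onion == message
```

The guard node knows who you are but not what you asked for; the exit node knows what was asked but not by whom, which is the property the paragraph above describes.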

Of course, Tor Browser has its drawbacks. Due to the complexity of protecting certain processes, the speed of your internet could be affected. Also, the NoScript function may break some websites. Moreover, there is a possibility that law enforcement is able to see who is using the Tor browser, even if they are not aware of what people do there.

2. Brave

Brave was launched in 2016, and despite being quite a new browser, it is worth considering. This privacy-focused browser is based on Chromium and truly packs a punch with features like an ad blocker, anti-tracking, and anti-fingerprinting technology. Brave also automatically upgrades your connection to HTTPS, as it's always important to have a safely encrypted connection. Its security options let you choose which data to erase when you exit the program, and an embedded feature can prevent scripts from launching. The Chromium core also makes it simple to bring over Chrome extensions, which makes the browser more functional and convenient. Nevertheless, users should still be cautious when selecting extensions and stick to ones that meet their individual security requirements.

Despite the fact that Brave is open-sourced, some users may be wary of its Chromium foundation. Brave’s advertising model is particularly contentious, as it favours adverts that benefit the browser over those that benefit the websites you visit.

3. Mozilla Firefox

Firefox is well known for the variety of settings and extensions it offers its users, and it's one of the more common alternatives to Chrome or Safari. Although Firefox doesn't release updates as frequently as the previously mentioned browsers, it does so on a regular basis. Considering that Mozilla is a non-commercial organisation, it's worth crediting the company's volunteers, who do a great job of keeping Firefox's security systems up to date. Firefox's security features have something for everyone: phishing and malware protection, blocking of known attack sites, and warnings about attempts to install site add-ons. It's also equipped with ad blockers and tracker detection, and it's easy to use thanks to its minimalistic style and simplicity. However, be sure to turn off the telemetry feature, which shares your browser's data with Mozilla. Such a feature might bother privacy-conscious users, but it can easily be disabled via the settings tab.

4. Epic

The main point of using Epic is that you benefit without changing any built-in extensions: it comes preconfigured to make internet access secure and private. Its settings block cookies after every session, preventing data tracking and unwanted ads. Epic can also search via DuckDuckGo (a privacy-focused search engine that is a good tool for keeping your personal data safe). This browser is fundamentally aimed at making your internet access private: it disables auto-syncing, spell check, auto-fill, and many other functions that can collect users' personal information, and of course it doesn't save your browsing history, access credentials, or cookies. Epic also tries to hide your IP address in every possible way to protect information about your location and your device.

The settings focused on high privacy can break some sites and functions, so the browser is not ideal in all cases; extensions can be added, but only at the expense of privacy.

The explicit disadvantage of the Epic browser is that it’s based on the Chromium code, which is not open-sourced. So there’s no certainty that this code will be independent in the future.


So, what browser can be named the best one suited to protecting your identity and personal data? Well, none of them can be completely private. There are a number of browsers that claim to be the most private and protected ones, but even they have flaws that can disturb some users. So, you should choose the browser that mostly satisfies your own requirements and seems to be the best one for you. In this article, we’ve collected some facts on the ‘chef’s pick’ of browsers. Now, the decision is in your hands.

What is the most private internet browser in 2022?

Apr 21, 2022 — 4 min read

What is a brute force attack?

Among the myriad of different cyberattacks, the brute force attack is perhaps the most common and primitive way of hacking. The technique involves guessing login information through trial and error: hackers try all conceivable combinations in the hope of eventually guessing correctly.

The term “brute force" refers to the method itself, being both brutal and forceful. Despite the fact that brute force attacks are a pretty ancient cyberattack approach, they still remain a prominent technique among modern-day hackers.

Types of brute force attacks

A brute force attack can be split into a few different types, each kind employing a variety of techniques that serve to unearth your private data. You should be aware of how cybercriminals apply each type in order to ensure maximum protection.

1. The simple brute force attack — this refers to guessing the login credentials through logical deliberation, without the use of any software. Hackers simply go through every standard combination of letters and numerals, perhaps combining this with some information they know about you. This method is cumbersome yet reliable, as many people still use primitive, common passwords and PINs like "user1" or "12345" in order to remember them easily. Users who reuse the same password for every account put themselves in extra danger: if the hacker guesses one password correctly, it's likely to be the first port of call when it comes to their other accounts.

2. A dictionary attack — this involves the attacker running through a very large number of candidate passwords drawn from wordlists. Although this kind of assault is technically a brute force attack, it holds a significant place in the password-cracking process. The technique is named after the way hackers work during a break-in attempt: the criminals scan through password dictionaries, modifying words with different numbers and abbreviations. It usually takes a lot of time and has poor success rates compared to newer techniques, but it's easy to do if you have a computer at your disposal.

3. A hybrid brute force attack — this type combines the two that we’ve just looked at: the brute force attack and the dictionary attack. Combination passwords, which mix common words with random characters, are cracked using these approaches. Usually, it starts with a certain username which is used as a base for the following actions: hackers input a list of words that potentially could be included in the password, then combine them with different characters and numbers until they reach the correct password.

4. Reverse brute force attack — contrasting with other types of brute force attack, the reverse attack starts with a known password. Usually, hackers get these from leaked databases that are freely available on the internet. Attackers choose one password and look through millions of accounts until one matches. Of course, it’s easier for the criminal to locate a match when the password includes a name or a birth date, so it’s better to avoid using such information in your password.

5. Credential stuffing — this type of attack relies on users' poor cybersecurity habits. Hackers collect lists of already cracked or stolen usernames and passwords and then go through dozens of other websites to see whether the same credentials unlock other accounts of the same user. Thus, a person who reuses one password across various social networks, apps and websites lets the attacker reach every bit of private information in each of those accounts.

6. Botnets — this type of attack can be combined with any of the above. The main point of a botnet attack is to use extra computational resources to attack the victim. This way, hackers manage to avoid the costs and difficulties associated with running programs on their own systems by exploiting hijacked machines to carry out the brute force attack. Furthermore, the usage of botnets provides an additional layer of anonymity which is also desirable for hackers.
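
A dictionary attack like the one in point 2 can be sketched in a few lines against a leaked, unsalted SHA-256 hash. The wordlist, the mutation rules, and the target password below are all invented for illustration:

```python
import hashlib

def crack(stolen_hash, wordlist):
    """Try each dictionary word plus a few common mutations against
    a leaked (unsalted) SHA-256 password hash."""
    for word in wordlist:
        for candidate in (word, word.capitalize(),
                          word + "123", word.capitalize() + "123"):
            if hashlib.sha256(candidate.encode()).hexdigest() == stolen_hash:
                return candidate
    return None  # not in the dictionary: a random password survives

leaked = hashlib.sha256(b"Sunshine123").hexdigest()
print(crack(leaked, ["password", "sunshine", "dragon"]))  # Sunshine123
```

Notice that the human-friendly mutation ("capitalise it and stick 123 on the end") is exactly what makes such passwords weak, while a genuinely random password simply never appears in the candidate stream.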

Brute force attack tools

It can take a long time to crack the password of somebody’s email or website, so hackers have created some software to assist them in breaching accounts, which makes the process easier and faster.

1. Aircrack-ng is a toolset that gives an attacker the means to assess and break into various Wi-Fi security setups. It can monitor and export packet data, and it supports attacks such as spoofing access points and packet injection. The software is free and can be acquired by anyone.

2. DaveGrohl is a brute-forcing tool that was made to assist in dictionary attacks. It offers a mode that helps hackers to attack a victim using the force of several computers.

3. John the Ripper is a password-recovery program. It supports hundreds of hash and cipher types, including those used in macOS, Unix and Windows, as well as in various web applications, network traffic and document files.

These programs can quickly go through all conceivable combinations and choose the correct one to breach a variety of computer protocols, encrypted information storage systems and modems.

Examples of brute force attacks:

Brute force attacks are so common that almost every person or organisation has at least once fallen victim. Even worldwide organisations that are known for their robust security systems could be exposed to a brute force attack. For example, in 2018, it was uncovered that Firefox’s master password was quite easy to figure out. Because of this, nobody knows how much personal data was actually leaked into the network. This wasn’t the only brute force attack to occur that year. Unknown hackers compromised the accounts of numerous members of the Parliament of Northern Ireland.

Three years before that, Dunkin' Donuts, a doughnut and coffee franchise, fell victim to another brute force attack, which cost people large sums of money through a breach of the company's mobile app. Cybercriminals used brute force to obtain illegal access to the credentials of more than 19,000 people and eventually took their money. Unfortunately, the company didn't make users aware of the attack, so people couldn't take the appropriate precautions to protect their personal data and money, and a complaint was eventually filed against it.

Despite most people actually being aware of the measures required for privacy maintenance, a lot of users still disregard the rules of cybersecurity by trying to simplify access to their accounts with a simple, reusable and easily memorable combination. This way, they make themselves potential victims of brute force attacks, which are largely possible thanks to the carelessness of cyber-civilians.

The brute force attack: definition and examples

Apr 14, 2022 — 4 min read

Hacking attacks are often pretty minor and can include things such as personal data theft for the purpose of extortion. These attacks usually fail, but can sometimes be really devastating when the subject is a business or government organisation; we’re talking huge monetary losses. There are a huge variety of applications and software that were created to protect users from theft of data that is kept on their electronic devices or online accounts, and one of them is a firewall.

What is it?

A firewall is a network security system that filters incoming and outgoing network traffic according to rules set by the user. The main goal of a firewall is to minimise or eliminate undesired network connections while letting all legitimate communication flow freely. Firewalls are a significant part of cybersecurity and can be used alongside other tools to provide a high level of personal data security.

Firewalls are useful for every internet user, whether a company or an individual. Hackers can attempt to exploit any computer connected to the network for their malicious purposes. Your computer might be hacked in order to spread damaging links through your email or social network accounts. That's why you should be aware of how to protect yourself from breaches.

A firewall can be built into an operating system; an ordinary user usually doesn't notice the work of this protective system until it detects a threat in an incoming file.

What is inside it?

A firewall could be a type of software or hardware that looks for threatening elements that have been flagged as dangerous. Hardware options for intercepting traffic travelling between the broadband router and user devices are frequently included in routers. Software options are programs that track the data that enters and exits your computer. Regardless of the type, the main function of a firewall is to block the receipt or dispatch of data packets that have been identified as a security risk. This two-way security system not only protects your computer from attacks but also prevents the spread of malicious data through the network. It can also be used to block unwanted applications.

For example, if an employer wants to control the work of its employees, they’re able to set the system to block certain apps such as Skype, gaming apps, or social networks. Also, firewalls provide an opportunity to block unwanted advertising while controlling access to traffic. A good example of this function can be found when connecting to certain kinds of public WiFi, specifically when you should enter your phone number or email address to get access.

Now let's take a look at the four most common types of firewalls: packet filtering, proxy service, stateful inspection and NGFW. Of course, they are quite different in how they monitor and filter network communication.

Packet filtering

Packet filtering, also known as a ‘stateless’ firewall, analyses particular packets (little data units) one at a time using a set of filters. Thus, the system finds out and stops packets that contain danger while allowing the rest to proceed to their destination. This type of firewall is based on the filtration of unique headers of the packets — the main part that contains basic data like the origin of the packet and its destination.
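As a rough sketch of the idea, a stateless filter only ever inspects header fields; the field names and rule format below are hypothetical, not any real firewall's API:

```python
# A minimal sketch of stateless packet filtering: each rule matches only
# header fields (source IP, destination port, protocol), never the payload.

def matches(rule, packet):
    """True if every field named in the rule equals the packet's header value."""
    return all(packet.get(field) == value for field, value in rule.items())

def is_allowed(packet, blocklist):
    """Stateless check: drop the packet if any blocklist rule matches its headers."""
    return not any(matches(rule, packet) for rule in blocklist)

blocklist = [
    {"src_ip": "203.0.113.7"},            # block a known-bad source address
    {"dst_port": 23, "protocol": "tcp"},  # block inbound Telnet
]

packet = {"src_ip": "198.51.100.2", "dst_port": 443, "protocol": "tcp"}
print(is_allowed(packet, blocklist))  # → True
```

Because each packet is judged in isolation, a stateless filter is fast but knows nothing about the connection the packet belongs to.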

Proxy service

An application firewall (or proxy service) is quite an efficient tool. It acts as a cyber intermediary rather than a filtering system, safeguarding your network location from malicious actors by effectively producing a 'mirror' of the computer behind the firewall, which excludes the possibility of direct communication between your computer and the outside network. But this type has some flaws. Primarily, it works slower than firewalls of other types and generally has application limitations.

Stateful inspection

Stateful firewalls are more versatile than the previous types as they are able to evaluate not only the headers of the packets but also the variety of aspects that each data lot contains. To pass the firewall, the incoming packet must correspond to the trusted information defined by the firewall rules. This type of firewall is the most streamlined and provides efficient security.


Next-generation firewalls (NGFW)

The last type of firewall is also classed as an application firewall, although developers usually call them next-generation firewalls (NGFW). This type includes the best of the previous generations of firewalls and adds a more comprehensive double-checking of packets, which prevents mistakes in detecting potentially threatening traffic. The newest firewalls are also equipped with extra security systems such as VPNs, IPSs (intrusion prevention systems), and identity management systems. This allows users to utilise just one app or kind of software for multi-layer security.

By using firewalls, you mitigate the risks of data theft and the unauthorised entry into your personal devices and accounts. These systems provide safety by establishing protective filters inside every part of your network to detect and eliminate threats like backdoors, viruses, remote logins, or spam.

Also, firewalls vary depending on their intended use. Some were created for personal or small-group (under 100 people) use. A good example is the firewall that is built into the Windows operating system. Others have been developed for companies with 1,000 or more employees. These need to be set up carefully and are usually managed by an in-house company programmer.

Now, let's talk a bit more about how rule-based firewalls work. As previously stated, traffic that passes through a firewall is compared with the firewall's rules to determine whether it can be accepted into the system or not. The program contains a list of rules which is consistently compared with incoming data packets. Every rule begins with an index word: accept, drop or reject. The system acts according to those words and the directions that follow. If the packet satisfies a direction indexed with 'accept', it will be accepted. Conversely, if it satisfies a 'reject' direction, that traffic will be blocked and an 'unreachable' error will be returned to the sender. Finally, 'drop' means that the traffic will be blocked without any reply being sent.
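That accept/reject/drop logic can be sketched as a first-match rule list; the rule format and default action below are illustrative, not any particular firewall's syntax:

```python
# Illustrative first-match rule evaluation: rules are checked in order, and
# the first matching rule's action decides the verdict for the packet.

RULES = [
    ("accept", {"dst_port": 443}),  # allow HTTPS
    ("reject", {"dst_port": 23}),   # block Telnet, reply with an 'unreachable' error
    ("drop",   {"dst_port": 137}),  # block NetBIOS silently, no reply sent
]

DEFAULT_ACTION = "drop"  # packets matching no rule are silently discarded

def evaluate(packet):
    """Return the action of the first rule whose conditions all match."""
    for action, conditions in RULES:
        if all(packet.get(key) == value for key, value in conditions.items()):
            return action
    return DEFAULT_ACTION

print(evaluate({"dst_port": 23}))  # → reject
```

Because evaluation stops at the first match, the order of the rules matters: a broad 'accept' placed above a narrow 'reject' would let the blocked traffic through.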

That is, of course, just a surface-level explanation of what a firewall is. For a deep dive, visit

What is a firewall and how does it work?

Apr 14, 2022 — 4 min read

Almost every user of the internet faces hacking at least once. Cybercriminals have invented a wide range of techniques that aim to crack your password or harvest the credentials and personal data including financial and banking details that are stored on your computer. The timely detection of break-ins may save your privacy and protect your data from being compromised. That’s why you should always be aware of the signs of being hacked and the actions to be taken if you do find yourself in such a situation.

To begin with, it’s important to confirm whether you’ve been hacked or not. Nowadays, the Internet provides multiple ways to do it:

1. Information about almost every cyberattack is available on the Internet. Various news sources publish alerts about data breaches that could be an early indicator that your account has been compromised and that you should take action. Indeed, it’s important to make sure that you have at least one reliable source which provides information about cyberattacks.

2. There is a range of websites available that provide information about every hacked account. One of them is “Have I Been Pwned” which contains a wide database of records about cyberattacks and compromised accounts. To check the security of your account, you should just enter your email address and the system will tell you if your data was the subject of a hacker attack. A positive result doesn’t guarantee that your privacy was violated by cybercriminals, but it does mean that some of your login credentials may have become publicly available.

3. Another powerful tool for ensuring the security of your data is Dehashed.

It's a more comprehensive program than the previous one. Just enter any keyword connected to your account like an old username and Dehashed will surf the Internet to check whether that nugget of your private information has been leaked to the public.

4. Most official public apps and sites alert users about access attempts by sending emails or text messages. So, if you get suspicious alerts that someone has accessed your account from a peculiar IP address or at a strange time, check your VPN to make sure it wasn’t you, and if it wasn’t, there’s a high likelihood that you’ve been hacked.

5. To mitigate the risks of being hacked, check the security of sites that you use regularly. There are a lot of tools that can help you to do that. One of the best is the Sucuri Site Checker. To find out if one of your daily used sites was attacked or endangered your private data, you should enter the website link and the program will provide you with information about any threat that the site could pose. If you reveal that the website you use every day is risky, it would be better to change the password of any account connected to that site and also check the safety of your accounts with the tools aforementioned.
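Have I Been Pwned also offers a password-checking API built on a k-anonymity scheme: only the first five characters of your password's SHA-1 hash are ever sent over the network. Here is a minimal sketch of the client-side part (the function name is our own; the network call itself is left as a comment):

```python
import hashlib

def hibp_range_query_parts(password):
    """Split a password's SHA-1 hash the way Have I Been Pwned's k-anonymity
    API expects: only the 5-character prefix is ever transmitted, so the
    full hash never leaves your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # You would then GET https://api.pwnedpasswords.com/range/<prefix> and
    # search the response for <suffix> among the returned hash suffixes.
    return prefix, suffix

prefix, suffix = hibp_range_query_parts("password123")
print(prefix, len(suffix))
```

If the suffix appears in the response, that password has shown up in a known breach and should be retired.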

It’s also worth checking for suspicious activity on your account manually. Many sites and apps provide the possibility of tracing recent activity and it’s a wonderful way to find out if anyone else has been using your account. This feature gives information about where, when, and from what device you logged in. Frequenting this setting is the thin end of the wedge of successfully protecting your data; it ultimately prevents any undetected interference. If you have some active sessions opened in an unknown place or from an unknown device, you’ll be in the know. Such an issue can simply be solved by changing the password and logging out all existing sessions. Besides, it's better to protect your account by activating two-factor authentication in addition to changing the main password.

What should you do next?

First of all, you should notify your friends, colleagues, and other people you contact through the hacked account to make sure that they don't fall prey to the activities of your hacker. Most petty cybercriminals use your account to extend the hacking chain. They pretend to be you and send messages or emails asking for material help for an ill grandma, or for a phone number to send an SMS with a malicious link, pretending that it is needed to recover access to a locked account of theirs. Nowadays, almost everyone knows about such criminal ploys, but elderly people and loved ones often still fall victim. Thus, it's necessary to alert everybody that someone else is using your account for the purposes of scamming.

The next significant recommendation is not to pay the ransom. If you received a message from the scammer telling you that you have to pay for the safety of your data, just don't respond and you won't have to pay. The main reason why hackers breach your account is to get money. If they understand that you won't pay, they're more often than not going to leave you alone. But, if you're still worried about the security of your data, you should contact your local law enforcement agency. Another reason why you mustn't pay the ransom is that you indirectly promote the hackers' actions; you're investing in their endeavour and stimulating their activity. Moreover, they might see you as a cash cow and try even harder to extort you in the future.

The next course of action in taking control over your account is changing your password. Indeed, use your noggin to figure out which sites may use the same password, and change those passwords too; it goes without saying that you shouldn’t reuse the same password on multiple websites. It’s also worth using a password generator to acquire new passwords, consisting of sophisticated alpha-numerical combinations that are naturally hard to guess.
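In Python, for example, generating such a password is a one-liner with the standard `secrets` module (a quick sketch, with an illustrative function name):

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password from letters, digits and punctuation using
    the cryptographically secure `secrets` module (not `random`, whose
    output is predictable and unsuitable for secrets)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

A 16-character password drawn from that 94-symbol alphabet gives roughly 94¹⁶ possibilities, far beyond the reach of guessing attacks.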

In some instances, cybercriminals will anticipate these steps if they feel that their cover is blown, and they’ll beat you to changing your password. In this situation, you should contact the site’s support team to prove that you’re the true owner of the account. Sites such as Facebook will ask you to upload a proof of ID, for example. To prevent hacking from happening in the future, it’s worth taking advantage of the ability to link your email address and phone number to your account.

Recovering access to the account is time-consuming, as all the information ought to be checked manually. So, if you get in such trouble and find yourself waiting to get back your account, take some time to reflect on getting yourself a good-quality password manager that will eliminate the burden in the future. Hint hint.

How to check if you’ve been hacked & the next steps

Mar 31, 2022 — 5 min read

Which words pop into your head when creating a password for your new account on a website or on a social network? Safety? Privacy? Well, there’s some bad news for you here — in our digital world, hackers are clued-up on hacking any kind of password that you can think into existence, and as a matter of fact, it’s a global problem. Users of the internet can never be sure that their accounts are protected enough to prevent data theft. Even global organizations such as Facebook can be the subject of cyber-attacks. And we mention the social giant for good reason too — in March 2020, the British company Comparitech stated that the data of more than 267 million people was leaked.

Ergo, it’s of paramount importance to know which techniques cybercriminals use to hack your password and steal your private information. There are a great number of methods that hackers can use to deceive people in order to steal private credentials and data. That’s why, today, we’re going through the most common techniques that can be used, so you’ll be in the know and much more secure online as a result.

1. Phishing

The easiest and most common way of hacking someone's password is phishing. There are plenty of techniques here: phishing can take the form of an email, an SMS, a direct message on a social media platform, or a public post on a website. Cybercriminals spread a link or attachment that hooks an internet user in. The link leads a victim to a fake log-in page where he or she is asked to enter their data. Once they do, the hackers obtain a variety of credentials that can be used for any purpose. This way, people get their sensitive information served up on a silver platter. As this technique is one of the oldest in the book, most users are aware of such a ploy. Almost everyone knows that following a suspicious link on the internet is a sure way of compromising yourself. Indeed, that's why emails from unknown addresses tend to fall straight into the spam box and we're used to blocking unknown numbers.

2. Social engineering

This type of cyberattack is based on the mistakes and imprudence that come as standard with the human brain. A criminal tricks the victim by acting like he or she is a real agent of an official company. It might be a fake call from your bank or some kind of technical support branch. You'll likely be asked to provide confidential data so that the 'agent' may investigate 'suspicious activity' on your bank account. Usually, social engineering is most successful in manipulating elderly people due to their trusting nature. This technique is quite widespread and is much easier than creating an entire fake website to phish someone's password.

3. Brute force attack

Brute force attacks are best characterized by the long, heavy method of checking each possible password variant. This way is really time-consuming, so most hackers use special software to automate the process. Most of the time, such attacks are based on knowledge gained from previous cracks as users often reuse their passwords on multiple websites and platforms. Also, cybercriminals might try lists of common variations of letters and numbers. That’s why, to protect yourself from such attacks, you should use as many symbols as possible and create passwords from unconnected words and unpredictable alpha-numerical compilations. Alternatively, you could use a password manager to automate this struggle (nudge nudge).
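As a toy illustration of why length and alphabet size matter, here is a minimal brute-forcer (a hypothetical helper using only Python's standard library) that recovers a short lowercase password from its SHA-256 hash almost instantly, while every extra character multiplies the work by the alphabet size:

```python
import hashlib
import itertools
import string

def brute_force(target_hash, alphabet, max_len):
    """Try every combination up to max_len characters; return the matching
    password or None. The search space grows as alphabet_size ** length,
    which is why long passwords over a large alphabet resist brute force."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            candidate = "".join(combo)
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

# A 3-character lowercase password falls after at most 26 + 26**2 + 26**3 tries.
target = hashlib.sha256(b"cat").hexdigest()
print(brute_force(target, string.ascii_lowercase, 3))  # → cat
```

Swap in a 12-character password over letters, digits and symbols and the same loop would need on the order of 94¹² attempts, which is why attackers fall back on leaked lists instead.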

4. Dictionary attack

The dictionary attack partly resembles the previous method (brute force attack); the main idea of such a cyberattack is to submit all possible password variations by taking words from the dictionary. The strict structure of the dictionary makes the process of finding the right combination easier. Moreover, it takes less time to crack the password if the hacker knows some sensitive information about the victim, like the name of their child, pet, or favorite color, for instance. Indeed, predictable human nature is the reason why this is such an effective method. To eliminate the possibility of such a cyberattack, it's worth mixing semantically unconnected words, numerals, and other symbols. The best way, of course, is to get a password manager (nudge nudge).

5. Rainbow table attack

Passwords stored on the victim’s computer are usually encrypted. The plain text is replaced by various strings (hashes) to prevent data leaks. This method is named ‘hashing’. However, this method doesn’t guarantee that the password won’t be cracked; hackers are very familiar with such multi-layer security. The ‘rainbow table’ is a list of passwords and their hashes that have already been acquired through previous attacks. Hackers try to decrypt hashes by figuring out the correct combination based on different variations from the rainbow table. As a result, the password’s code may be retrieved from the database, removing the necessity to hack it. A good way to mitigate the risks of such an attack is to use software that includes randomly generated data in the password before hashing it.
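That "randomly generated data" is known as a salt. A minimal sketch using Python's standard library follows (PBKDF2 is shown for illustration; real systems should rely on a vetted password-hashing library):

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Hash a password with a random per-user salt, so identical passwords
    produce different hashes and precomputed rainbow tables become useless."""
    if salt is None:
        salt = os.urandom(16)  # 16 random bytes, unique per stored password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

salt_a, hash_a = hash_password("hunter2")
salt_b, hash_b = hash_password("hunter2")
print(hash_a != hash_b)  # same password, different salts → different hashes
```

To verify a login attempt, the stored salt is reused: hashing the submitted password with it must reproduce the stored digest.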

6. Spidering

Many companies base their passwords on the names of the products they produce to help their staff remember the credentials that they need to access corporate accounts. Spidering is a type of cyberattack that uses this information to hack the company’s system and exploit the obtained information for malicious purposes. They surf the sites of organizations and learn about their businesses. Then, this knowledge is used to make a list of keywords that can be exploited in brute force attacks. As this process is quite time-consuming, experienced hackers utilize automatic software such as the infamous ‘web crawler’.

7. Malware

Malware is a harmful kind of software created to steal private information from the computer that it has been installed on. The victim gives access to his or her computer by clicking on a link specially made by cybercriminals. While this technique has various forms, the most common are keyloggers and screen scrapers that take a video of a user's screen or screenshots when passwords are being entered. They then send this data to the hacker. Some kinds of malware can encrypt a system’s data and prevent users from accessing certain programs. Others can look through users’ data to find a password dictionary that can be used in a variety of ways.

The number of techniques used by hackers to crack our passwords keeps growing. The more ways there are to prevent break-ins, the more work hackers have to do to get around them. That's why you should leave it to us, Passwork, your neighborly password-managing wizards, to lift the burden from your shoulders.

Password-cracking techniques used by hackers

Mar 25, 2022 — 5 min read

If you've heard of ‘SHA’ in various forms but aren't sure what it stands for or why it's essential — you’re in luck! We'll attempt to shed some light on the family of cryptographic hash algorithms today.

But, before we get into SHA, let's go over what a hash function is and how it works. Before you can comprehend what SHA-1 and SHA-2 are, you must first grasp these principles.

Let's get started.

What Is a Hash Function?

A hash function maps a string of characters (known as a key) to a value of fixed length. The hash value is a representation of the original string of characters, but it is usually smaller.

Because the shorter hash value is simpler to search for than the lengthier text, hashing is used for indexing and finding things in databases. Encryption employs hashing as well.

SHA-1, SHA-2, SHA-256… What’s this all about?

The family of secure hash algorithms has gone through several generations. The initial iteration was SHA-1, which was followed by SHA-2, an updated and better version of the first. SHA-2 is not a single algorithm but a family that produces digests of several bit lengths, and the 256-bit variant is referred to as SHA-256. Simply put, if you see "SHA-2," "SHA-256" or "SHA-256 bit," the latter two names refer to the same thing: the 256-bit member of the SHA-2 family.
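Python's hashlib exposes these variants directly, which makes the naming concrete: SHA-256 and SHA-512 are simply the 256-bit and 512-bit members of the SHA-2 family.

```python
import hashlib

# Digest lengths of the SHA family, as exposed by Python's hashlib:
print(hashlib.sha1(b"").digest_size * 8)    # 160-bit SHA-1
print(hashlib.sha256(b"").digest_size * 8)  # 256-bit member of SHA-2
print(hashlib.sha512(b"").digest_size * 8)  # 512-bit member of SHA-2

# The well-known SHA-256 test vector for the message "abc":
print(hashlib.sha256(b"abc").hexdigest())
```

Running this prints 160, 256 and 512, followed by the 64-hex-character SHA-256 digest of "abc".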

The NIST's Formal Acceptance

FIPS 180-4, published by the National Institute of Standards and Technology, officially defines the SHA-256 standard. Moreover, a set of test vectors is included with standardization and formalization to confirm that developers have correctly implemented the method.

Let’s break down the algorithm and how it works:

1. Append padding bits

The first step in our hashing process is to add bits to our original message to bring it to the standard length required by the hash function. To accomplish this, we append a single '1' bit to the message, followed by '0' bits. The number of bits we add is chosen so that the message's length is precisely 64 bits less than a multiple of 512 after these bits are added. This can be expressed mathematically in the following way:

n x 512 = M + P + 64

M is the original message's length.
P stands for padded bits.

2. Append length bits

Now that we've added our padding bits to the original message, we can go ahead and add our length bits, which are equal to 64 bits, to make the whole message an exact multiple of 512.

We know we need to add exactly 64 extra bits, so we compute them as the length of the original message (the one without the padding) modulo 2⁶⁴, encoded as a 64-bit integer. We append this length to the padded message and get the complete message block, which must be a multiple of 512 bits.
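Steps 1 and 2 can be sketched together in a few lines of Python (an illustrative helper, not a full SHA-256 implementation):

```python
def sha256_padding(message: bytes) -> bytes:
    """Build the SHA-256 padding: a single '1' bit, then '0' bits until the
    length is 64 bits short of a multiple of 512, then the original bit
    length as a 64-bit big-endian integer."""
    bit_len = len(message) * 8
    pad = b"\x80"  # 0b10000000: the leading '1' bit plus seven '0' bits
    while (len(message) + len(pad)) % 64 != 56:  # 56 bytes = 448 bits = 512 - 64
        pad += b"\x00"
    pad += bit_len.to_bytes(8, "big")
    return pad

padded = b"abc" + sha256_padding(b"abc")
print(len(padded) * 8)  # → 512
```

For the 3-byte message "abc", the padding brings the total to exactly one 512-bit block: 24 message bits, 423 padding bits, and the 64-bit length field.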

3. Initialize the buffers

We now have our message block, on which we will begin our calculations in order to determine the final hash. Before we get started, I want to point out that we'll need certain default settings to get started with the steps we'll be taking.

a = 0x6a09e667
b = 0xbb67ae85
c = 0x3c6ef372
d = 0xa54ff53a
e = 0x510e527f
f = 0x9b05688c
g = 0x1f83d9ab
h = 0x5be0cd19

Keep these principles in the back of your mind for now; all will fit together in the following phase. There are a further 64 variables to remember, which will operate as keys and are symbolized by the letter 'k.'

Let's go on to the portion where we calculate the hash using these data.

4. Compression Function

This is where the bulk of the hashing algorithm takes place. The whole message block, which is 'n x 512' bits long, is broken into 'n' chunks of 512 bits, each of which is then put through 64 rounds of operations, with the result being provided as input for the next round of operations.

The 64 rounds of operation conducted on a 512-bit message are plainly visible in the figure above. We can see that we send in two inputs: W(i) and K(i). During the first 16 rounds, we further break down the 512-bit message into 16 pieces, each consisting of 32 bits. Indeed, we must compute the value for W(i) at each step.

W(i) = W(i−16) + σ₀(W(i−15)) + W(i−7) + σ₁(W(i−2))
σ₀(x) = ROTR⁷(x) XOR ROTR¹⁸(x) XOR SHR³(x)
σ₁(x) = ROTR¹⁷(x) XOR ROTR¹⁹(x) XOR SHR¹⁰(x)
ROTRⁿ(x) = circular right rotation of 'x' by 'n' bits
SHRⁿ(x)  = logical right shift of 'x' by 'n' bits
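The message-schedule expansion can be sketched as follows (an illustrative helper; the variable names are our own):

```python
def expand_schedule(block_words):
    """Expand the 16 32-bit words of one 512-bit block into the 64-word
    SHA-256 message schedule W, using the sigma functions defined above.
    All arithmetic is modulo 2**32."""
    MASK = 0xFFFFFFFF

    def rotr(x, n):  # circular right rotation within 32 bits
        return ((x >> n) | (x << (32 - n))) & MASK

    W = list(block_words)  # W[0..15] come straight from the block
    for i in range(16, 64):
        s0 = rotr(W[i - 15], 7) ^ rotr(W[i - 15], 18) ^ (W[i - 15] >> 3)
        s1 = rotr(W[i - 2], 17) ^ rotr(W[i - 2], 19) ^ (W[i - 2] >> 10)
        W.append((W[i - 16] + s0 + W[i - 7] + s1) & MASK)
    return W

W = expand_schedule([0] * 15 + [1])
print(len(W))  # → 64
```

Note that SHR discards shifted-out bits while ROTR wraps them around; mixing the two is what spreads each input word's bits across the later schedule words.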

5. Output

Every round's output is used as an input for the next round, and so on until just the final bits of the message are left, at which point the result of the last round for the nth portion of the message block will give us the result, i.e. the hash for the whole message. The output has a length of 256 bits.


In a nutshell, the whole principle behind SHA would sound something like this:

We determine the length of the message to be hashed, then append padding bits, beginning with a '1' and continuing with '0's, until the message length is precisely 64 bits less than a multiple of 512. We then append the length of the original message modulo 2⁶⁴ as the remaining 64 bits. The complete message block may be represented as 'n x 512' bits once these bits are added. Now, we split each 512-bit chunk into 16 pieces of 32 bits each and run the compression function, which consists of 64 rounds of operations. These 16 words, each of 32 bits, serve as input for the first 16 rounds, and for the next 48 rounds we have a technique to compute W(i). We also include the preset buffer values and the 'k' constants for each of the 64 rounds. We can now begin computing hashes since we have all of the necessary numbers and formulae. The hashing procedure is repeated 64 times, with the result of round i serving as the input for round i+1. As a result, the output of the 64th operation on the nth chunk will be the output, which is the hash of the whole message.

The SHA-256 hashing algorithm is now one of the most extensively used hashing algorithms since it has yet to be cracked and the hashes are generated rapidly when compared to other safe hashes such as the SHA-512. It is well-established, but the industry is working to gradually transition to SHA-512, which is more secure, since experts believe SHA-256 may become susceptible to hacking in the near future.

How SHA-256 works

Mar 23, 2022 — 5 min read

According to a survey conducted in February 2021, 46% of participants stated that on average, they spent five to six hours on their phone on a daily basis.

That means for almost half of you, a quarter of your life’s security will be dictated by your choice of mobile platform. However, how safe are these popular phone platforms? Because mobile devices have grown to be so important and pervasive in people's lives, they have piqued the interest of criminal hackers looking to steal your personal information.

The technology itself is always advancing, and that’s why we’re not looking to compare specific Android or iOS versions today, but rather the core principles and philosophy behind Apple and ‘the rest’ — which importantly, have consequences in terms of privacy and security.

Let’s start with the most common threat.

App Control

Usually, when it comes to installing an app, there is only one common method to do it — via a specific Store — for example, Google Play or the AppStore. On both platforms, the uploaded application will next go through an app review procedure to verify that it is not dangerous and does not breach any developer policies.

These rules are designed to guarantee that the app's content is suitable, that it doesn't copy other applications or people, that it follows monetization standards, and that it meets the minimum functionality criteria (it should not crash all the time, and it should respect the user experience, for instance).

The problem is that employees tasked with determining whether applications meet particular requirements may be unaware of what the app actually does with personal data. The number of Android and iOS applications (as well as their creators) is constantly growing, and as a consequence, corporations have had to recruit more reviewers in recent years.

And we all know what happens when a firm adds thousands of people all at once: the learning management system becomes difficult to scale, and not all employees are effectively onboarded.

As far as we can tell, the greatest difference in approach is that Apple has actual people checking each app 100 percent of the time, while Google attempts to automate this process as much as possible — and it consistently causes difficulties for them.

According to a report issued in November 2020 by the NortonLifeLock Research Group, between 10% and 24% of 34 million apps scattered over 12 million Android devices might be classified as harmful or possibly undesired apps, depending on your classifications.

Of those applications, 67% were installed from the Google Play Store. The researchers mention that:

"The Play market is the main app distribution vector responsible for 87% of all installs and 67% of unwanted installs”

So, if you’re a person that loads tons of apps while searching for “the perfect one” — consider deleting the underdogs — the fewer apps you have on your phone, the better.

Permission control and telemetry

The most serious danger to your mobile security comes from apps that request too many access permissions and subsequently leak your information.

While the app store is mostly responsible for filtering out malware riffraff that affects a disproportionate number of Android users, iPhone users are not immune to assaults.

And what we mean is that while most iOS users believe they are secure, they are not. First and foremost, when an app gains access to, say, 'All photos,' few users realize that the app may load all of your images in the background, use machine learning to find NSFW content, and discreetly submit it all to a server. Moreover, you won't get the cool camera access dot appearing if the app does that.

Furthermore, even if you disable all of the app's permissions, the app may still gather and monitor a range of data. Every app can monitor 29 highly detailed data points about your iPhone, according to an examination by researchers at privacy software firm Lockdown and The Washington Post, including your IP address, free storage, your current volume level (to 3 decimal points), and even your battery status (to 15 decimal points).

But what about Android?

Well, we have bad news for its consumers as well: according to a study undertaken by Douglas Leith of Trinity College Dublin, Google gathers more than twenty times the amount of data from a typical Android device than Apple does from an iPhone.

This observation remains true even when a user has specifically opted out of telemetry collection. Every 4.5 minutes, both Android and iOS devices send data to Google and Apple's servers, and there's nothing you can do about it as a user.

According to these researchers, smartphones with default privacy settings communicate information such as the IMEI, SIM serial number, phone number, hardware serial number, location, cookies, local IP address, neighboring Wi-Fi MAC addresses, and even the advertising ID.

Both companies, by the way, disagree with the results, claiming that they just expose what is required to keep phones functioning properly.


Keeping your phone's operating system up to date is the simplest approach to keeping it safe. Updates aid in the mitigation of software vulnerabilities, which are a kind of security flaw detected in an operating system. Hackers make use of this flaw by building code that targets a particular vulnerability, which is often packaged as malware. Simply visiting a website, reading a compromised email, or playing malicious media might infect your smartphone. This is what occurred when the bank credentials of 300,000 Android users were exposed by regular applications on the Google Play store.

When it comes to transmitting upgrades to your palm, Apple still has the manufacturing infrastructure, carrier network contracts, and underlying programming in place to make it happen quickly and painlessly. While some consumers continue to complain about iOS' famed lack of customization, Apple's well-policed walled garden has also ensured that iPhone users are essentially impervious to viruses without even realizing it.

Google, on the other hand, still can’t fix the Android update problem.

Because each Android smartphone has its own hardware, when Google pushes an update, it may take up to a year for other smartphone makers to upgrade their devices, and that's only if they intend to do so. Other Android phones, apart from the Google Pixel series, seldom get all upgrades for an extended period of time — and there are various reasons for this. The first factor to evaluate is the number of models available from each manufacturer. Apple only adds around four iPhones to its portfolio per year, so the total number of iOS devices it needs to support is quite modest when compared to that of Android — which is why the 7-year-old iPhone 6s is still getting the latest upgrades in 2022.


The most significant distinction between iOS and Android in terms of security and safety is their ideology. Because iOS is a closed ecosystem, it is entirely under Apple's control when it comes to security. The reason is that, as far as we know, Apple does not gain profit from advertisements (apart from program advertisements in the AppStore), hence it is not interested in gathering and selling your data to third parties.

Google, on the other hand, earns the majority of its revenue from advertisements, which implies that its success is dependent on its ability to target its adverts as precisely as possible. Even though Android is a free and open-source operating system, the Google Play Services that gather data are not.

In the end, Android has a lower degree of security out of the box, but custom Android versions may give a high level of protection.

How secure are iOS and Android, really?

Mar 9, 2022 — 4 min read

Why do you actually need a VPN?

Virtual Private Networks (VPNs) encrypt your data and hide your online activity from third parties, allowing you to surf the web anonymously.

Web servers collect information about your computer's IP address and other information about your browsing history when you visit a website that is hosted on their servers. Your data is scrambled and far more difficult for third parties to monitor when you use a VPN that first connects to a private server.

Consumer VPNs are mostly used for anonymous web surfing. Some people also use a VPN at home to connect to computers and files on their local network from a different place.

What are the possibilities of using a VPN?

Because a VPN alters your Internet Protocol (IP) address, it may perform a wide range of functions. When a computer is linked to the internet, it has a unique IP address that notifies other computers where it is situated. When you use a VPN, you first connect to a remote computer (a server) to fool other computers into thinking you're in a different place. When using a virtual private network, it is possible to choose a fake location for yourself.

PrivacyJournal offers reviews of the best VPN providers out there.

There are many options available to you when you get a new IP address. The catalogue available on streaming services like Spotify and Netflix, for example, varies by region. Using a VPN allows you to access streaming libraries in other countries.

A VPN may also be used to circumvent censorship. A practice known as geo-blocking may be used by certain government agencies to ban websites and services in certain regions or territories. Through the use of a VPN, you may cultivate the illusion that you’re in a different place, masking your real IP address and as a result, accessing restricted media.

In terms of the ‘dark side’, individuals use VPNs to download copyrighted material and engage in other illicit actions online as there is an obfuscation of responsibility.

Does a VPN provide ultimate privacy?

Encryption is a crucial component of a VPN. The essence of it is that it scrambles your data so that only the right key can decode it; we'll go into more detail about encryption in the following section. To put it another way, it's a deadbolt lock for your data.

Before it reaches the internet, all of your data travels via an encrypted tunnel, where it is inaccessible to everyone else. As a result, when you visit a website, onlookers along the way cannot read the information your browser sends with it. And browsers carry a lot of data, like your time zone, language, operating system, and even your screen resolution.

Although none of this data directly identifies you, the full collection is likely unique to you and may be used to identify you via a technique known as browser fingerprinting. Government authorities, marketers and hackers may use this information against you.
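
To see how individually innocuous attributes combine into a near-unique identifier, here is a minimal sketch. The attribute names and values below are hypothetical, and real fingerprinting draws on many more signals, but the principle is the same: hash everything the browser reveals into one identifier.

```python
import hashlib

# Hypothetical attributes a browser reveals with every request
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "timezone": "Europe/London",
    "language": "en-GB",
    "screen": "1920x1080",
}

# Individually common values combine into a near-unique fingerprint:
# the same set of attributes always yields the same hash
fingerprint = hashlib.sha256(
    "|".join(f"{k}={v}" for k, v in sorted(attributes.items())).encode()
).hexdigest()
print(fingerprint[:16])
```

Any tracker that computes the same hash on two visits can link them to the same person, no cookies required.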

A VPN conceals all of your browser information, as well as your browsing history. While you’re connected, no one, not even your internet service provider, can tell what you’re doing online.

A VPN isn’t a one-stop-shop for internet privacy, however. Anything you do while connected to the internet is fair game, including websites you log into and services you utilize. Many browsers utilize an account to move information like your browsing history and cookies between devices. This data isn’t safeguarded by your VPN tunnel, either.

How does it work?

VPNs provide an additional layer of protection for your online activity. As previously stated, using a private VPN server enables you to mask your IP address and make it look as if you're connected to the open internet via a different location.

All of this is possible because of VPN protocols, which are used by VPN service providers. VPN protocols are simply a set of instructions for your computer to follow while connecting to a server. The protocol also specifies encryption requirements in addition to a ‘how to’ of setting up and managing your connection.

Encryption is a major reason to use a VPN. All but a small percentage of web surfing now already takes place over encrypted HTTPS connections. Yet even over an encrypted connection, metadata about your browsing, such as which sites you visit, is still being sent.

Think of your internet connection as a passageway through which you move information. In order to keep your online activities private, this tunnel is protected by a layer of encryption. Whenever you connect to your Twitter account, for example, you're doing it over a secure tunnel that only you and Twitter can see.

A VPN works the same way. Instead of directly connecting to the internet, your data is routed via a VPN server, where it is encrypted and rendered anonymous. The AES cipher with a 256-bit key is used by most VPN services. AES is a widely used block cipher for encrypting and decrypting data.

By establishing an encrypted connection, the VPN server verifies that you are indeed connected to a certain private network. Data and browser history are then shielded from prying eyes outside the tunnel and never leave it.

To summarize, a VPN creates an encrypted path for your data to travel through on its way to and from the VPN server. In most cases, there’s no way for anybody to know who you are or where you’re from when connected to a VPN server.

Is it a panacea?

When it comes to the effectiveness of VPNs, there's no secret sauce. A renowned VPN service like NordVPN or TorGuard is all you need to ensure that your VPN works. Individual product evaluations are of course necessary.

There's a short test you can do to determine whether your VPN connection is functioning. Various websites will, for free, show you your IP address, DNS queries, and WebRTC data (basically, everything a VPN should, in theory, obfuscate). Verify that the information is different when your VPN is active. As long as it is, your VPN is running as it should.

What is a VPN?

Feb 25, 2022 — 5 min read

Of course, losing access to your Google or Gmail account is going to be upsetting. If you've forgotten your password, or if someone has hacked into your account and changed it, Google provides a list of actions that you may take to regain access to your account. Indeed, they may come in handy at times, but the methods of password recovery for Google accounts tend to change from time to time and relying on them as a fallback is never a good idea.

Not only have we provided all the necessary links in the “Password recovery” section down below for those who have lost access to certain accounts, but we’ll today be focusing on what can be done to ensure you never lose access to your account again. Here are some things to consider:

Regularly backup your data

If you have a current backup of your data, it will be less of a blow if you ever lose access to your account. Takeout is the name Google has given to the feature that allows you to download your data. You may download all of the data from all of your Google applications, or just part of the data from some of them. You might even decide to download the data from a single app, such as Gmail, from your Google account.

For each sort of data, the download formats are different. For example, MBOX files may be imported to Gmail or most other email services and applications.
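
As a rough illustration of how portable the MBOX format is, Python's standard library can read one directly. The file path and message below are invented for the example; with a real Takeout export, you would just point `mailbox.mbox` at the downloaded file.

```python
import mailbox
import os
import tempfile
from email.message import EmailMessage

# Hypothetical path; Google Takeout exports Gmail as a single .mbox file
path = os.path.join(tempfile.mkdtemp(), "All mail.mbox")

# Create a tiny mbox so the example is self-contained
box = mailbox.mbox(path)
msg = EmailMessage()
msg["From"] = "gmail-noreply@google.com"
msg["Subject"] = "Welcome to Gmail"
msg.set_content("Hello!")
box.add(msg)
box.flush()

# Read it back, the way you would with a Takeout export
subjects = [message["Subject"] for message in mailbox.mbox(path)]
print(subjects)
```

The same loop works on a multi-gigabyte export, which is what makes MBOX a convenient backup format.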

Keep your old passwords

Keep a copy of your old passwords in case you forget your current one. Google uses this method to verify your identity if you ever lose your password. In the event that you haven't updated your password in a while, you may not be able to recall your old password. It's a good idea to maintain a copy of your previous Google passwords in a secure place when you change your password.

When using a password manager such as Passwork, you can keep track of your previous passwords, which is why we strongly recommend using one. Note that when you set a new password on an app or website, most password managers simply update the existing entry, and the old password is lost. Instead, create a new entry for the new password, then go back and rename the old one to something like "Gmail — old password". By the way, this is also a problem with Apple Keychain — when you change your password, it asks whether you would like to update your old password. You’ll obviously press “Update”, and bam, your previous password is lost in the void. So keep an eye on that.

Why is this important? Well, as we’ve hinted at, Google asks you to enter the previous password in some cases as a fallback plan.

Fill in the recovery info

Google provides you with many ways to recover your password:

  1. Go to your Google account and choose "Security" from the left-hand column
  2. Scroll all the way down to "Ways that we can verify that it’s you"
  3. Fill them in

Now, Google will use those options to recover your password when needed, or just to verify it’s you when weird login behaviour is detected. Among all the options, the ‘Recovery phone’ is the most convenient one — trust me, you’ll forget that ‘Security Question’ in just a few days. ‘Recovery email’, to be honest, isn't secure enough — we, Earthlings, tend to use weak passwords, so your account might be compromised if a hacker manages to guess your ‘NicknameDateOFBirth’ password.

Remember the day you registered

If everything else fails, Google may ask you to provide an estimated date of when you created the account. The best way to get this date is by searching for a Gmail welcome email.

To locate the welcome email, go to the ‘All Mail’ folder on your computer (to see it, you may need to click ‘More’ to expand the folders). You may also hover your cursor over the page information in the upper right-hand corner and choose ‘Oldest’.

This will move the email you received first to the top of the list. If, on the other hand, you imported non-Gmail emails into your inbox from before 2004, the welcome email will not appear at the top of the inbox hierarchy. Also, if you haven’t imported all of your emails, you’ll encounter some problems.

The email may also be found by searching for "welcome" or "Gmail team," among other similar words and phrases.

However, when I personally tried it, I couldn't find it. This is because I delete all the mail on my account once a year. For people like myself, there’s a weird hack — your POP settings might show the date on which you created your Gmail account.
To access them, click the gear icon in the top right-hand corner, select See all settings, then click Forwarding and POP/IMAP.

Look for the Status line in the POP download section. If you're fortunate, you'll come upon the following information:

Status: POP is enabled for all mail that has arrived since [Here is your date]


If you’ve ever changed your POP settings, the date on which you created your Gmail account won’t be shown.

Password recovery

There’s only one place where you can recover your password — it’s this “Google Recovery” page. Everything else is likely a phishing scam. The only alternative, in the event of something like losing your password, is the “Can’t sign into your Google Account” page.

Basically, you should follow the instructions on screen and pray to Google's mothership that hope shall be restored.

If your prayers haven’t been heard, and all pages cycle through a loop with a “Please try again” message, visit the “Tips to complete account recovery steps” page — it helped me several times to understand exactly what Google wants from me.

The last page you can visit, if everything else fails, is “Create a replacement Google Account”.


If you have important data stored on any cloud: Gmail, Google Drive, Docs, etc. — back them up using offline storage. Use two-factor authentication to always keep your mobile phone as a recovery option. Keep hold of your password change history and remember the date you registered your account.

I forgot my Gmail password!

Feb 25, 2022 — 4 min read

Since the dawn of the internet, the world of chat programs has seen drastic transformations. Given that not that much time has passed since the creation of the first chat app — CompuServe's CB Simulator — the rate of progress and development is quite astounding.

Chat protocols and frameworks are the subject of this article, and we're going into great detail about everything from their history to their security issues. At Passwork, we’re also very interested in the security solutions that are intertwined with these chat protocols, so without further ado, let’s explore the most popular ones:

WebRTC — Web Real-Time Communication

Using WebRTC, you may participate in rich, real-time multimedia conversations. Most web-based video and audio chat systems used to require you to download third-party plugins or applications; WebRTC, by contrast, is built into most browsers and utilises APIs to link peers directly. An open-source initiative developed by Google, WebRTC enables real-time communication in your web browser.

By default, WebRTC uses UDP (the User Datagram Protocol). However, if there is a firewall between the two devices, WebRTC is able to use TCP (the Transmission Control Protocol).

When it comes to WebRTC, it’s primarily used to provide an immersive multimedia experience without the requirement for third-party applications or services.

Direct and unmediated communication with your consumers is one of the main advantages of WebRTC. If you're looking for complete control over customer communication channels, this is the ideal solution for you. As an additional bonus, there is no rate limit or price adjustments due to the absence of a third-party provider.

To construct a chat solution using WebRTC, you'll need access to a developer who is familiar with the technology.

Discord, Google Hangouts, and Facebook Messenger are just a few of the numerous chat apps that make use of WebRTC.


WebRTC enforces or supports essential security practises in all major areas, since it was, after all, created with security in mind. Beyond being safe by design, it also encourages developers building on it to take security seriously.

WebRTC is now considered by some to be one of the most secure VoIP technologies available as a consequence of a significant emphasis on secure communication. The basic idea of having encryption enabled by default is that a call is always private. Encryption and security are no longer regarded as optional features. To top it off, WebRTC is open source, making it a compelling and dependable platform on which developers may wish to construct their next product.

Of course, we anticipate a rise in the number of communication services that provide significantly enhanced security to their consumers in the near future. But, for the time being, WebRTC is one of the frontrunners.


WebSocket

WebSocket, one of the most widely used communication protocols, establishes a low-latency, near-real-time connection between the client and the server via the use of sockets. As it aims to achieve the lowest latency feasible, it is most often utilised for online gaming, chat apps and the real-time updating of social feeds, for example.

In contrast to a standard HTTP system, in which communication may only be started by the client, a WebSocket connection allows communication to take place both ways. The WebSocket API is supported by all current browsers, and data is transferred through a TCP port, similar to the HTTP protocol.
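
A WebSocket connection actually begins life as an HTTP request: the client asks to "upgrade" the connection, and per RFC 6455 the server proves it understood by hashing the client's `Sec-WebSocket-Key` header with a fixed GUID. That accept token can be computed with stdlib Python:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket opening handshake
GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(client_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value for a client's key."""
    digest = hashlib.sha1((client_key + GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# Example key/accept pair taken from RFC 6455 itself
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After this handshake succeeds, the connection stops speaking HTTP and both sides exchange WebSocket frames over the same TCP socket.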

When a series of conversations around the inclusion of a TCP-based socket API in the HTML5 standard began in June 2008, WebSocket was one of the ideas that emerged.

WebSocket is a technology that is used by many websites that demand real-time updates. WebSocket is used by several large chat apps, such as Slack, as part of their technology stack.


WebSocket as a technology has some inherent security issues:

  • WebSocket permits an unlimited number of connections to the target server, which can let an attacker deplete the server's resources in a denial-of-service (DoS) attack;
  • WebSockets may be utilised over unencrypted TCP channels, which can expose sensitive data to serious flaws like those described in the OWASP Top 10 A6-Sensitive Data Exposure category;
  • WebSockets are susceptible to malicious input data, which can lead to Cross-Site Scripting (XSS) attacks.

XMPP: Extensible Messaging and Presence Protocol

With the launch of Jabber in 1999, Jeremie Miller brought XMPP to the world.

XMPP is a protocol for sending XML (Extensible Markup Language) data over the internet. It facilitates real-time communications by improving the push method between clients. XMPP aims to create a network of linked devices that can communicate with one another using their own trusted servers.
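
As a sketch of what "XML over the internet" means in practice, here is a minimal XMPP-style message stanza built with Python's standard library. The addresses are made-up examples, and a real client would also negotiate streams, TLS, and authentication around stanzas like this one:

```python
import xml.etree.ElementTree as ET

# A minimal XMPP message stanza; the JIDs (user addresses) are invented
msg = ET.Element("message", {
    "from": "alice@example.com",
    "to": "bob@example.com",
    "type": "chat",
})
ET.SubElement(msg, "body").text = "Hello, Bob!"

# Serialize the stanza as it would travel over the XML stream
xml_str = ET.tostring(msg, encoding="unicode")
print(xml_str)
```

Everything in XMPP — messages, presence updates, roster queries — is a small XML stanza of this shape pushed over a long-lived connection.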

It's a cross-platform protocol that can be used with an XMPP client on the Web, Android, and iOS platforms. The enormous number of free and open-source clients is one of XMPP's unique selling points, allowing for a very simple installation procedure.

Instant messaging, multi-party conversations, content dissemination, and multimedia streaming, such as audio and video calls, are all examples of the protocol’s applications. Many big chat programs, such as WhatsApp and Zoom, employ XMPP.

Google Talk, ICQ, and Cisco Meeting provide excellent illustrations of what can be accomplished when using an XMPP client.


XMPP has had its security vetted by the experts at the IETF, and so has native support for pluggable authentication (via SASL) and leading-edge security (via TLS).

Still, that doesn’t stop some hackers from finding vulnerabilities:

An unauthenticated, remote attacker might utilise a vulnerability in the Cisco Meeting Server software's Extensible Messaging and Presence Protocol (XMPP) capability to create a denial of service (DoS) scenario for users of XMPP conferencing apps. The flaw is caused by XMPP packets that haven't been properly validated. An attacker could take advantage of this flaw by sending specially crafted XMPP packets to a vulnerable device, or even by inducing process crashes and denial-of-service scenarios in XMPP conferencing apps — either way, it's not great for the users.

You can read more about this exploit here.

Comparison of instant messaging protocols

Feb 10, 2022 — 4 min read

If the concept of ‘quantum cryptography' sounds complicated to you, you're right. That’s why this ‘encryption tutorial for dummies’ shall demystify the concept and provide an explanation in layman’s terms.

Quantum cryptography, which has been around for a few decades, is becoming more and more important to our daily lives because of its ability to protect essential data in a manner that conventional encryption techniques cannot.

What is it?

Cryptography, as we all know, is a technique that aims to encrypt data by scrambling plain text so that only those with the appropriate ‘key’ can read it. By extension, quantum cryptography encrypts data and transmits it in an unhackable manner using the principles of quantum mechanics.

While such a concept seems straightforward, the intricacy resides in the quantum mechanics that underpin quantum cryptography. For example:

  • The particles that make up the cosmos are fundamentally unpredictable, and they may exist in several places or states of existence at the same time;
  • A quantum attribute cannot be measured without causing it to change or be disturbed;
  • Some quantum attributes of a particle can be cloned, but not the whole particle.

How does it work?

Theoretically, quantum cryptography operates by following a model that was first published in 1984.

Assume there are two people called Alice and Bob who want to communicate a message in a safe manner, according to the model of quantum cryptography. Alice sends Bob a key, which serves as the signal for the communication to begin. The key is carried by a stream of photons that travel in just one direction. Each photon corresponds to a single bit of data — either a 0 or a 1. However, in addition to traveling in a straight path, these photons are oscillating, or vibrating, in a certain fashion as they move.

Alice, the sender, passes each photon through a polarizer before transmitting it. A polarizer is a filter that gives a photon a particular orientation of vibration, known as its polarization. There are several polarization states to choose from, including vertical (1 bit), horizontal (0 bit), 45 degrees right (1 bit) and 45 degrees left (0 bit). Whichever scheme she employs, each transmitted photon carries one of two polarizations, encoding a single bit — either 0 or 1.

From the polarizer, the photons now travel via optical fiber to Bob, the receiver. Each photon is analyzed using a beam splitter, which determines its polarization. Bob does not know the correct polarization of the incoming photons, so he chooses a polarization at random for each one from the pool of available options. Alice then tells Bob which polarizer she used to send each photon, and Bob checks whether he read it with the matching one. The photons that were read with the incorrect splitter are eliminated, and the sequence that remains is deemed the key sequence.
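
The sifting procedure just described is easy to simulate classically. The sketch below models only the basis choices and the public comparison, not actual quantum states: when Bob's basis matches Alice's, he reads her bit; otherwise his result is random and the position is discarded.

```python
import secrets

def random_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

def bb84_sift(n=64):
    # Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal)
    alice_bits = random_bits(n)
    alice_bases = random_bits(n)
    # Bob measures each photon in his own randomly chosen basis
    bob_bases = random_bits(n)
    # Matching basis -> Bob reads Alice's bit; otherwise the outcome is random
    bob_bits = [
        bit if a_basis == b_basis else secrets.randbelow(2)
        for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
    ]
    # Publicly compare bases and keep only the positions where they matched
    key_alice = [b for b, a, bb in zip(alice_bits, alice_bases, bob_bases) if a == bb]
    key_bob = [b for b, a, bb in zip(bob_bits, alice_bases, bob_bases) if a == bb]
    return key_alice, key_bob

key_alice, key_bob = bb84_sift()
print(key_alice == key_bob)  # True: sifted keys agree with no eavesdropper
```

On average half the positions survive sifting, which is why Alice sends roughly twice as many photons as the key length she needs.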

Let's pretend there is an eavesdropper present, who goes by the name of Eve. Eve seeks to listen in and has the same tools as Bob. However, Bob has the benefit of being able to confer with Alice in order to check which polarizer type was used for each photon; Eve does not, so she is ultimately unable to reconstruct the final key.

Alice and Bob would also be aware if Eve was listening in on their conversation. Because observing a quantum state disturbs it, Eve's measurements alter the photon states that Alice and Bob anticipate seeing, revealing her presence.

Well, that’s all pretty mind-blowing, but for us, the general public, the biggest question is…

Is it really used?

Although the model described above has not yet been fully developed, there have been successful implementations of it, including the following:

  • The University of Cambridge and the Toshiba Corporation collaborated to develop a high-bit-rate quantum key distribution system based on the BB84 quantum cryptography protocol;
  • DARPA's Quantum Network, which operated from 2002 to 2007, was a 10-node QKD (Quantum Key Distribution) network constructed by Boston University, Harvard University, and IBM Research. It was operated by the Defense Advanced Research Projects Agency;
  • Quantum Xchange created the first quantum network in the United States, which is comprised of over 1,000 kilometers of optical fiber;
  • The development of commercial QKD systems was also carried out by commercial businesses such as ID Quantique, Toshiba, Quintessence Labs, and MagiQ Technologies Inc.

As you can see, these rare implementations are pretty far from what you’d expect to use every day. But hopefully, that will change in the near future.

The pros and cons of quantum cryptography

As with any developing technology, the state of it now (2022) may be very different to its state in the future. Thus, the following table may change dramatically. We do believe, however, that we’ll see fewer points in the ‘Limitations’ column as the years go on.

The need for unbreakable encryption is right there staring us down. The development of quantum computers is on the horizon, and the security of encrypted data is now in jeopardy due to the threat of quantum computing. We are fortunate in that quantum cryptography, in the form of QKD, provides us with the answer we need to protect our information long into the future — all while adhering to the difficult laws of quantum physics.

What is quantum cryptography?

Feb 8, 2022 — 4 min read

I’d like you to reflect on your personal interactions when it comes to the internet. Consider the impact that the internet has had on society. Have these two things changed with time? Of course. Indeed, with more social media platforms and apps for mobile devices than ever before, we’ve yet another fundamental transition on the horizon…

The Web's Evolution

The web has developed a lot over the years, and its applications are nearly unrecognizable. Web 1.0, Web 2.0, and Web 3.0 are often used as benchmarks to describe the web's progression.

Web 1.0

Web 1.0 was the original web. Most participants were content consumers, while producers were mostly web developers who built websites with mostly text or graphic material. Web 1.0 ran from 1991 until 2004. Web 1.0 sites served static material rather than dynamic HTML. Sites had little to no interactivity and data was supplied through a static file system rather than a database. Web 1.0 is the ‘read-only’ web.

Web 2.0

Most of us have only used the web in its present incarnation (Web2). Web2 is the social and interactive web. You don't have to be a developer to create in the Web2 universe. Many applications are designed so that anybody may create. You can create and share a concept with the world. You can also post a video for millions of others to see, connect with your viewers, and comment on the video itself. Web2 is easy, and because of that, more and more individuals are becoming content creators. The web as it is now is fantastic in many aspects, but there are still several issues.

Privacy and security

Web2 applications are plagued by data breaches on an almost daily basis. If you want to know when your personal information has been leaked, there are websites devoted to keeping track of these incidents and alerting you.

Your data and how it is handled are completely out of your hands when it comes to Web2. When it comes to tracking and storing user data, many organizations do so without their customers' permission. The firms in charge of these platforms then possess and manage all of this data. Also at risk are users who reside in nations where exercising free-speech rights might have unintended repercussions. Authorities often take down sites or confiscate funds if they suspect someone is disseminating information contrary to the official line. Governments can easily interfere with, control, or shut down programs that use centralized servers. By the same token, banks are digital and under centralized control — governments typically meddle in this area as well. During times of volatility, severe inflation, or other kinds of political instability, they have the ability to close bank accounts or restrict access to cash.

By starting from the bottom up, Web3 seeks to fix many of these flaws by reimagining how we design and interact with the internet and entities within it.

What exactly is Web 3.0?

Web2 and Web3 differ in a few ways, but decentralization is the defining theme of Web3. Web3 adds a few new features to the internet that we already use. It can be defined as the following:

  • Verifiable
  • Trustless
  • Self-governing
  • Permissionless
  • Distributed and robust
  • Stateful
  • With built-in payments

When working with Web3, programmers seldom create and deploy applications that rely on a single server or database (usually hosted on and managed by a single cloud provider).

Instead, Web3 applications either run on blockchains, decentralized networks of many peer to peer nodes (servers), or a combination of the two that forms a crypto-economic protocol.  Many people in the Web3 community refer to these applications as "dapps" (decentralized apps), a word that you’ll see swimming around quite often.

An incentive for network members (developers) to deliver the best service possible is a key component of a robust and secure decentralized network.

Web3 is often discussed in conjunction with cryptocurrencies. This is due to the fact that many of these protocols rely heavily on cryptocurrencies. Anyone who wishes to become involved in one of the projects is given tokens (a cash incentive) in exchange for their time and effort.

Computation, storage, bandwidth, identity, hosting, and other online services are all things that, in the past, only cloud providers offered.

Participating in the protocol in a variety of ways, both technical and non-technical, might be a source of income.

The protocol is often paid for by users in the same way that a cloud service provider like AWS charges its customers today. In Web3, however, the money flows directly to the network members. The elimination of middlemen that are both unneeded and inefficient is a hallmark of this sort of decentralization.

There are utility tokens provided by several online infrastructure protocols including Filecoin, LivePeer, Arweave, and The Graph. Many tiers of the network are rewarded with these tokens. This is how even native blockchain systems like Ethereum work.

How Web3 Handles Identity and Privacy

Here, at Passwork, security is paramount. This is where, technically, Web3 shines the most. Identity is handled quite differently within Web3. The wallet address of the user engaging with the app is usually used to link identities in Web3 applications. This means that wallet addresses, unlike Web2 authentication methods like OAuth or email + password, are fully anonymous until the user wishes to publicly link their identity to it.

It is possible for a user to build up their reputation over time if they choose to use the same wallet for various decentralized applications (dapps).

Authentication and identification layers may be replaced with self-sovereign identity protocols and tools like Ceramic and IDX. An RFP for a "Sign in with Ethereum" standard is currently being worked on by the founders of Ethereum.


Web 3.0's set of capabilities has the potential to fundamentally alter the way we see and utilize the internet, giving people more agency, spawning new sectors, and enabling networks to operate without a centralized authority or single point of failure. It’s just a matter of time until Web 3.0 becomes the new global standard.

As far as answering the question raised in the title goes — on paper, Web3 should eliminate most of the privacy and security issues faced with Web2. In practice, this is not yet certain.

What is Web3?

Jan 27, 2022 — 4 min read

Most of us have heard of torrents, and have likely also used torrents to download movies, books, music, TV shows, games, and so on. But, you’ve probably still got one question that remains unanswered — what are they? BitTorrent is well-known as a technology for piracy, although its genius isn’t limited to that. It's a useful, decentralized peer-to-peer protocol that outperforms other protocols in many different ways.

In this article, we’ll break it down to be as simple as possible while also explaining several terms that might pass you by every day.

Why do I need BitTorrent?

When you download data from the internet, most of the time, your computer connects to the webserver and immediately downloads the data from a certain server. This is how the majority of online traffic operates. Even though we have fancy technologies to reroute this traffic through many different servers, there’s still a bottleneck in terms of the maximum speed that may be provided to the end-user. But what if you know someone who already has the file that you need? Wouldn’t it be better to ‘AirDrop’ it, instead of uploading it to a server and then downloading it once again? Well, that’s what Bram Cohen created his BitTorrent protocol for.

How does BitTorrent work?

BitTorrent is effectively a peer-to-peer protocol, which means that computers in a BitTorrent ‘swarm’ (a collection of computers downloading and uploading the same torrent) exchange data without the use of a central server.

A machine traditionally joins a BitTorrent swarm by loading a .torrent file onto a BitTorrent client. The BitTorrent client communicates with a ‘tracker’ defined in the .torrent file. The tracker is a dedicated server that keeps track of the computers that are linked to it. The tracker exchanges IP addresses with other BitTorrent clients in the swarm, allowing them to communicate with one another. That’s exactly why it is called a ‘tracker’ — it tracks the participants of any given communicative endeavour.

Once connected, a BitTorrent client downloads tiny chunks of the torrent's contents, as many as it can. As soon as the client has downloaded any data, it may start uploading it to other BitTorrent clients in the swarm. As a result, everyone who downloads a torrent is also uploading the same torrent — all the pieces that the user has at that moment. This increases the download speed for everyone. When 10,000 users download the same file, no central server is overburdened. Instead, each downloader shares their upload capacity with other downloaders, ensuring that the torrent remains as fast as possible.

Importantly, BitTorrent clients never download files directly from the tracker. The tracker simply participates in the torrent by keeping track of the BitTorrent clients that are connected to the swarm, not by downloading or uploading data.
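
The .torrent file itself is a dictionary (tracker URL, piece hashes, file metadata) serialized in a simple format called bencode. As an illustration, here is a minimal bencode decoder sketch in Python; a real client would also need an encoder and proper error handling:

```python
# Minimal bencode decoder sketch (the serialization used by .torrent files).
# Handles the four bencode types: integers, byte strings, lists, and dicts.

def bdecode(data: bytes, i: int = 0):
    """Decode one bencoded value starting at offset i; return (value, next_offset)."""
    if data[i:i + 1] == b"i":                    # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if data[i:i + 1] == b"l":                    # list: l<items>e
        i += 1
        items = []
        while data[i:i + 1] != b"e":
            item, i = bdecode(data, i)
            items.append(item)
        return items, i + 1
    if data[i:i + 1] == b"d":                    # dict: d<key><value>...e
        i += 1
        d = {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            val, i = bdecode(data, i)
            d[key] = val
        return d, i + 1
    # byte string: <length>:<bytes>
    colon = data.index(b":", i)
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

# A toy 'torrent' dictionary containing only an announce (tracker) URL:
meta, _ = bdecode(b"d8:announce21:http://tracker.local/e")
print(meta)  # {b'announce': b'http://tracker.local/'}
```

A real metainfo file would also carry an `info` dictionary with piece length, piece hashes, and file names, decoded exactly the same way.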

‘Seeds’, ‘Peers’, and ‘Leeches’

If you're a torrent user, you've probably come across the words ‘seeds’, ‘peers’, and ‘leeches’. Let's take a look at what these terms actually mean.

The seed is the user who has already downloaded the entire file and is now sharing it with peers while not downloading any parts of the file from others. To download a torrent, one seeder – who has a full copy of all the files in the torrent — must first join the swarm so that other users may download the data. If a torrent has no seeders, it cannot be downloaded since no connected user possesses the entire file.

Leechers or peers are people who are simultaneously downloading and uploading — so, the vast majority, basically. In the BitTorrent world, people who give more — get more. In other words, the BitTorrent client prefers to send data to clients who contribute more upload bandwidth rather than sending data to clients who upload at a very slow speed.

This improves download rates for the whole swarm and compensates those who donate more upload bandwidth.
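
The "give more, get more" policy is known as choking: a client periodically unchokes the peers that upload to it fastest, plus one random "optimistic unchoke" so that newcomers get a chance to prove themselves. A rough Python sketch of that selection step, with hypothetical peer records (real clients use more slots and re-evaluate every few seconds):

```python
import random

def pick_unchoked(peers, regular_slots=3, seed=None):
    """Tit-for-tat sketch: unchoke the peers that upload to us fastest,
    plus one randomly chosen 'optimistic unchoke' so newcomers get a chance."""
    by_rate = sorted(peers, key=lambda p: p["upload_rate"], reverse=True)
    unchoked = by_rate[:regular_slots]          # reward the best uploaders
    rest = by_rate[regular_slots:]
    rng = random.Random(seed)
    if rest:                                    # optimistic slot: random pick
        unchoked.append(rng.choice(rest))
    return [p["id"] for p in unchoked]

peers = [
    {"id": "A", "upload_rate": 120},
    {"id": "B", "upload_rate": 450},
    {"id": "C", "upload_rate": 80},
    {"id": "D", "upload_rate": 300},
    {"id": "E", "upload_rate": 10},
]
print(pick_unchoked(peers, seed=1))  # three fastest uploaders + one optimistic pick
```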

Moreover, you’ve probably encountered times when the file you wished to download via the torrent network wasn’t very popular, resulting in low speeds, or no download at all. This could happen, for example, because all the seeders were offline. Fortunately, our ability, as highly-evolved primates, to create new technologies is limitless, and we did manage to mitigate this problem, to an extent.

Apart from the .torrent file, you might be familiar with something called ‘magnet links’.

A magnet link includes a unique identifier, different data based on the torrent's exact nature, and, most significantly, a cryptographic hash of the torrent’s contents. And that’s really cool, because you can search for different torrents that lead to the same file. A magnet link uses that identifier to ‘attract’ all torrents that can help download those files, meaning more seeders, meaning more peers, meaning greater speeds and stability! All of this is made possible by the DHT (Distributed Hash Table) protocol, and according to its specification, "each peer effectively becomes a tracker." This implies that BitTorrent clients no longer require a centralized server to manage a swarm. Rather, BitTorrent evolves into a completely decentralized peer-to-peer file-sharing system.
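
Since a magnet link is just a URI with query parameters, its parts can be pulled apart with the standard library. A sketch (the hash and tracker shown are arbitrary examples, not real torrents):

```python
from urllib.parse import urlparse, parse_qs

def parse_magnet(link: str) -> dict:
    """Extract the info-hash ('xt'), display name ('dn'), and tracker list ('tr')
    from a magnet URI."""
    qs = parse_qs(urlparse(link).query)
    return {
        "info_hash": qs["xt"][0].split(":")[-1],  # xt=urn:btih:<hash>
        "name": qs.get("dn", [None])[0],
        "trackers": qs.get("tr", []),
    }

magnet = ("magnet:?xt=urn:btih:c12fe1c06bba254a9dc9f519b335aa7c1367a88a"
          "&dn=example.iso&tr=http://tracker.local/announce")
info = parse_magnet(magnet)
print(info["info_hash"])  # c12fe1c06bba254a9dc9f519b335aa7c1367a88a
```

The `xt` (exact topic) hash is what lets any client find peers for the same content through the DHT, with or without the listed trackers.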

Does this have applications outside of PirateBay?

Of course. Here are just some examples:

  • To deliver updates for games, including World of Warcraft, StarCraft II, and Diablo 3, Blizzard uses a special BitTorrent client. This enhances everyone's downloads by allowing users to share their upload bandwidth with others; utilizing idle bandwidth ensures quicker downloads for everyone;
  • Wikileaks released files using BitTorrent, which relieved the tremendous strain on their systems;
  • BitTorrent is used by Linux distributions to help disseminate ISO disc images.

Is it safe?

A big problem is that virus-ridden pirated content makes up a concerning chunk of all available torrents nowadays; that’s a huge threat to your security for obvious reasons. However, the biggest problem with torrents is that all BitTorrent users may see the IP addresses of all other users, as well as the data they send. And of course, copyright holders, the police, advertising organizations, and hackers frequently monitor this data. Because of this, it’s critical for torrent users to maintain total online security by utilizing internet security software and installing operating system updates as soon as they become available. Encrypting the internet connection and masking the IP address with a virtual private network (VPN) also helps, but not to a full extent. So the short answer is: it’s not very safe.

In the end, downloading via BitTorrent is worth it, most of the time, especially when it comes to big files. While lacking in security, BitTorrent is the Web2 version of Web3 that we’re all allowed to try right now, as long as it doesn’t infringe on copyright laws.

What is BitTorrent?

Jan 21, 2022 — 5 min read

Despite the fact that Wi-Fi is a trademark owned by the Wi-Fi Alliance, an organization committed to certifying that Wi-Fi equipment fulfills the IEEE's set of 802.11 wireless standards, the name ‘Wi-Fi’ is associated with wireless access in general nowadays.

These specifications, which include 802.11b (pronounced "Eight-O-Two-Eleven-Bee," omitting the "dot") and 802.11ac, are part of a family of specifications that began in the 1990s, which is still growing today. Improvements to wireless speed and range, as well as the usage of additional frequencies as they become available, are codified in the 802.11 standards.

What do those standards represent?

The IEEE 802.11 standard is a collection of technological advancements that have been developed over a long period of time. Each new breakthrough is specified by a one- or two-letter suffix to "802.11," which represents the amendment to the standard. The initial 802.11 standard supported only the 2.4-GHz band and allowed for speeds of up to 2 Mbps. New coding schemes introduced in 802.11b raised the speed to 11 Mbps. 802.11a added 5-GHz support and Orthogonal Frequency Division Multiplexing (OFDM) coding techniques, boosting speed to 54 Mbps. The 802.11g standard brought OFDM from the 802.11a standard to the 2.4-GHz band. 802.11n introduced a slew of high-throughput enhancements that increased throughput by a factor of ten, allowing high-end business access points to reach signaling rates of 450 Mbps.

As you may have noticed, the IEEE naming method for the standard is a little confusing, so the Wi-Fi Alliance has come up with some shorter names to make it easier to comprehend.

The alliance refers to 802.11ax Wi-Fi as Wi-Fi 6 — the current emerging standard. Wi-Fi 5 is now 802.11ac, while Wi-Fi 4 is now 802.11n. According to the Wi-Fi Alliance, the goal is to make it easier for the end-user to navigate through the myriads of routers and client devices.

Meanwhile, it's crucial to note that the Wi-Fi Alliance hasn't come up with new names for all of the 802.11 standards, so familiarity with the old ones is essential. Furthermore, the IEEE, which is still working on further versions of 802.11, has not accepted these new names, making it more difficult to find out information about them using the new names.
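
The mapping between the IEEE amendment names and the Wi-Fi Alliance's generation names is small enough to capture in a lookup table; a quick Python sketch:

```python
# The Wi-Fi Alliance's consumer names for the most recent IEEE 802.11 amendments.
# Older amendments such as 802.11a/b/g never received an alliance name.
WIFI_GENERATIONS = {
    "802.11n":  "Wi-Fi 4",
    "802.11ac": "Wi-Fi 5",
    "802.11ax": "Wi-Fi 6",
}

def friendly_name(ieee_name: str) -> str:
    # Fall back to the IEEE name when no alliance name exists.
    return WIFI_GENERATIONS.get(ieee_name, ieee_name)

print(friendly_name("802.11ax"))  # Wi-Fi 6
print(friendly_name("802.11b"))   # 802.11b (no alliance name)
```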

How secure is it?

The authentication security protocols defined by the Wi-Fi Alliance, such as Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access (WPA), are used to secure wireless networks. There are now four wireless security protocols available:

  • Wired Equivalent Privacy (WEP);
  • Wi-Fi Protected Access (WPA);
  • Wi-Fi Protected Access 2 (WPA 2);
  • Wi-Fi Protected Access 3 (WPA 3).

To be sure your network is secure, you must first identify which of these protocols your network uses.


Wired Equivalent Privacy (WEP)

The first security protocol to be implemented was Wired Equivalent Privacy (WEP). It was designed in 1997 and is now outdated, although it is still used with older devices today.

WEP employs a data encryption technique based on a mix of user and system-generated key values. However, hackers have devised strategies for reverse-engineering and breaking the encryption mechanism, making WEP the least secure network type.


Wi-Fi Protected Access (WPA)

The Wi-Fi Protected Access (WPA) protocol was created to address the weaknesses of the WEP protocol. WPA includes features like the Temporal Key Integrity Protocol (TKIP), a dynamic 128-bit key that proved more difficult to crack than WEP's static, unchanging key.

It also had encryption features like the Message Integrity Check, which looked for any tampered packets transmitted by hackers and the Pre-shared key (PSK), among others.

As detailed in this article, both WEP and WPA are very hackable, so please, take our advice and never use them.


Wi-Fi Protected Access 2 (WPA2)

WPA2 introduced substantial updates and new features to wireless security in 2004. WPA2 replaced TKIP with the Counter Mode Cipher Block Chaining Message Authentication Code Protocol (CCMP), a significantly more sophisticated encryption technology.

Since its creation, WPA2 has represented the industry standard; on March 13, 2006, the Wi-Fi Alliance specified that any future products using the Wi-Fi trademark must employ WPA2.


WPA2-PSK

To connect to a wireless network, WPA2-PSK requires only one password. It's often assumed that using a single password to access Wi-Fi is safe, but only if you trust the people who use it. This is obviously not very secure, considering that the key may fall into the wrong hands. As a result, this protocol is most commonly used for home or open Wi-Fi networks.

To encrypt a network with WPA2-PSK, you provide your router with a plain-English passphrase between 8 and 63 characters long, rather than an encryption key. That passphrase, combined with the network SSID, is used to derive the encryption keys, and CCMP then protects each wireless client's traffic with its own keys. Moreover, the encryption keys are updated on a regular basis.
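
The derivation step can be reproduced in a few lines: the router never uses your passphrase directly, but stretches it into a 256-bit pre-shared key with PBKDF2-HMAC-SHA1 over the passphrase and SSID, using 4096 iterations, as the 802.11i standard specifies:

```python
import hashlib

def wpa2_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit WPA2 pre-shared key from the passphrase and SSID
    using PBKDF2-HMAC-SHA1 with 4096 iterations (per IEEE 802.11i)."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

psk = wpa2_psk("correct horse battery staple", "HomeNetwork")
print(psk.hex())  # 64 hex characters: the key material the router actually uses
```

Note that the SSID acts as the salt, which is why the same passphrase yields different keys on differently named networks — and why precomputed cracking tables only work against common SSIDs.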


Wi-Fi Protected Access 3 (WPA3)

WPA3 is the most recent (and improved) version of WPA2, which has been in use since 2004. In 2018, the Wi-Fi Alliance began certifying WPA3-approved equipment.

Although WPA3 is more secure than WPA2, the Wi-Fi Alliance will continue to maintain and enhance WPA2 for the foreseeable future. When compared to WPA2, WPA3 includes the following noteworthy features:

  • Stronger brute force attack protection: WPA3 defends against offline password guesses by allowing only one guess per user and forcing them to engage directly with the Wi-Fi equipment, requiring them to be physically present each time they wish to guess the password. In public open networks, WPA2 lacks built-in encryption and privacy, making brute force attacks a significant danger;
  • Simultaneous Authentication of Equals protocol (SAE): This protocol is used to provide a secure handshake between a network device and a wireless access point, in which both devices interact to verify authentication and connection. Even if a user's password is weak, SAE provides a far more secure handshake than WPA2's pre-shared key exchange;
  • Individualized data encryption: When connecting to a public network, WPA3 uses a mechanism other than a shared password to sign up a new device. WPA3 employs the Wi-Fi Device Provisioning Protocol (DPP), which allows users to let devices onto the network via NFC tags or QR codes. WPA3 security also employs GCMP-256 encryption instead of 128-bit encryption.

WPA3 functionality will not be extended to all devices automatically. Users who want to use WPA3-approved devices must either purchase new routers that enable WPA3 or hope that the device's manufacturer implements updates to support the new protocol.

We, at Passwork, highly recommend using the latest security protocols while constantly updating your router’s firmware. When you ignore critical updates, you risk exposing holes in your security that allow hackers to take control of your network. Use sophisticated and long passwords at all times. Even if we’re talking about your home Wi-Fi network – remember, if your password is ‘12345678’, your neighbours can easily hack into your network and spoof the data.

What is the IEEE 802.11 Standard and its security?

Jan 12, 2022 — 4 min read

End-to-end encryption has been introduced by many communication providers in recent years, notably WhatsApp and Zoom. Although those companies have tried to explain the concept to their user base several times, we believe they failed. Whilst it's clear that these platforms have increased security, most don’t know how or why. Well, encryption is a rather simple concept to understand: It converts data into an unreadable format. But what exactly does "end-to-end" imply? What are the advantages and disadvantages of this added layer of security? We'll explain this as simply as possible without diving too much into the underlying math and technical terminology.

What is end-to-end encryption?

End-to-end encryption (E2EE) is a state-of-the-art protocol for communication security. Only the sender and the intended recipient(s) have access to the data in an end-to-end encrypted system. The encrypted data on the server is inaccessible to both hackers and undesirable third parties.

End-to-end encryption is best understood when compared to the encryption-in-transit approach, so let’s perform a quick recap. If a service employs encryption-in-transit, it is usually encrypted on your device before being delivered to the server. It’s then decrypted for processing on the server before it’s re-encrypted and routed to its final destination. When the data is in transit, it’s encrypted, but when it’s ‘at rest’, it’s decrypted. This safeguards the data during the most dangerous stage of the journey, transit — when it’s most exposed to hackers, interception, and theft.

End-to-end encryption, on the other hand, is the process of encrypting data on your device and not decrypting it until it reaches its destination. When your message travels through the server, not even the service that is delivering the data can view the content of your message.

In practice, this means that messengers using 'real' end-to-end encryption, like Signal, know only your phone number and the date of your last login – nothing more.

This is important for users that want to be sure their communication is kept secure from prying eyes. There are also some real-life examples that utilize end-to-end encryption for financial transactions and commercial communication.

How does it work?

The generation of a public-private key pair ensures the security of end-to-end encryption. This method, also known as asymmetric cryptography, encrypts and decrypts the message using distinct cryptographic keys. Public keys are widely distributed and are used to encrypt or ‘lock’ messages. Only the owner has access to the private keys, which are needed to unlock or decrypt the communication.

Whenever the user takes part in any end-to-end encrypted communication, the system automatically generates dedicated public and private keys.
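
The lock/unlock asymmetry can be shown with a textbook RSA example using deliberately tiny numbers. This is an illustration only: real keys are thousands of bits long and use padding schemes, so never use anything like this in practice:

```python
# Toy RSA with tiny textbook numbers, to illustrate the public/private
# key idea behind end-to-end encryption. NOT secure in any real sense.
p, q = 61, 53
n = p * q      # 3233, part of both keys
e = 17         # public exponent: anyone may encrypt with (e, n)
d = 2753       # private exponent: only the owner decrypts (e*d ≡ 1 mod φ(n))

def encrypt(m: int) -> int:
    """Lock a message with the public key; the server in the middle
    only ever sees this ciphertext."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Unlock with the private key, held only by the recipient."""
    return pow(c, d, n)

ciphertext = encrypt(65)
print(ciphertext)           # 2790: meaningless to anyone relaying it
print(decrypt(ciphertext))  # 65: recovered only by the key's owner
```

Real messengers combine this asymmetric step with fast symmetric ciphers (a hybrid scheme), but the principle is the same: the relaying server never holds the private key.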

If this sounds too complicated, here is a very simple metaphor:

You just bought a new Rolex for your buddy, who lives in Australia. Now, it’s already in a fancy green leather box, so you decide to put the stamp directly on it and send it. There is nothing wrong with that approach as long as you trust that the postal workers won’t steal it.

However, if you decide to put the Rolex box inside another box, hiding the nature of the gift from all interacting parties along the way, then you’ve effectively ensured (for all intents and purposes) that the Rolex is only visible to the intended recipient; when your mate from down under gets a hold of the box, he takes his pair of scissors and ‘decrypts’ the present. Indeed, you’ve ensured ‘end-to-end’ encryption.

You’re already using end-to-end encryption, daily

As we mentioned before, during an E2EE interaction, the server that delivers encrypted data between one "end" and the other "end" is unable to decode and read the data it relays. Even the servers' owners are unable to access the information, since it is not stored on the servers themselves; only the "endpoints" (the devices) in the conversation can decode the data.

If you use messengers like WhatsApp, iMessage, and Signal (where E2EE is enabled by default) or Telegram, Allo, and Facebook's ‘Secret Conversation’ function (where E2EE can be manually activated), then you’re already using end-to-end encryption daily.

What's more fascinating is that E2EE communication providers don't require you to trust them. And that’s great!

The fact that their systems can be hacked makes little difference to you, because the transported data is encrypted and can only be read by the sender and the receiver. This property has enraged several government agencies: there are known cases where such agencies asked for special ‘backdoors’ that would allow them to decrypt messages.

Why isn’t everything end-to-end encrypted?

End-to-end encryption is theoretically sound, but it lacks flexibility, thus it can't be utilized when the "two ends" that communicate data don't exist, such as with cloud storage.

This is why Zero-Knowledge Encryption was created, a solution that overcomes the problem by hiding the encryption key, even from the storage provider, resulting in an authentication request without the requirement for password exchange.

Moreover, end-to-end encryption does not hide information about the message, such as the date and time it was sent or the people who participated in the conversation. This metadata might provide indications on where the 'end-point' might be – not great if you are the target of a hacker.

The biggest problem, however, is that in reality, we never know whether the communication is end-to-end encrypted. Providers may claim to provide end-to-end encryption when what they truly deliver is encryption-in-transit. The information might be kept on a third-party server that can be accessed by anybody who has access to the server.


While it’s obvious that you shouldn’t be shipping Dave’s Rolex in its fancy green box, the reality is, if you’ve nothing to hide and you’re not transporting something incredibly valuable, encryption-in-transit is up to the job.

End-to-end encryption is a wonderful technology that enables a high level of security when properly implemented. But it doesn't really tackle the main issue – the end-user, still, to this day, needs to trust the system that they’re using to communicate. We hope that the next generation of encryption technologies such as ZKP will be able to change that.

What is End-to-end encryption?

Jan 10, 2022 — 4 min read

In this year of our lord, 2022, the term ‘Zero-Knowledge Encryption’ equates to best-in-class data insurance. We’ve already written an article named “What is Zero-Knowledge Proof?”, so we’re not going to look at definitions here, but rather, we’re going to explore the pros and cons of Zero-Knowledge proof encryption when compared to other technologies.

But for those who don’t want to dive deep into technical details, here’s an explanation of what Zero-Knowledge Encryption means:

It simply implies that no one else (not even the service provider) has access to your password-protected data.

This is important because even if your files are completely encrypted, if the server has access to the keys, a centralized hacker attack can result in a data breach.

In order to gain a better understanding of the factors that led to the development of Zero-Knowledge Encryption, we've decided to present a succinct, yet comprehensive, assessment of the advantages and disadvantages of three existing options:


Encryption-in-transit

Data in transit, also known as data in motion, is data that is actively flowing from one point to another, such as across the internet or through a private network. Protecting data in transit refers to securing data while it is being transferred from one network to another, or from a local storage device to a cloud storage device. Effective protection measures for in-transit data are critical, because data is often considered less secure while it moves. Think of it like hiring security guards to accompany your cash-in-transit vehicle’s trip to the bank.

This means that, with this approach, stored documents are fully decryptable on the server, and therefore vulnerable.

As for our everyday life, the following technologies use the ‘encryption-in-transit’ approach:


Encryption-at-rest

Data encryption is the process of converting data into a form that cannot be read by unauthorized users. For example, you may have saved a copy of your passport. You obviously don't want this data to be easily accessed. If you store encrypted data on your server, it is effectively "resting" there (which is why it’s called encryption-at-rest). This is usually accomplished through an algorithm whose output is incomprehensible to a user who does not have access to the encryption key needed to decode it. Only an authorized person is able to access the file, ensuring that your data is kept safe.

The Advanced Encryption Standard (AES) is often used to encrypt data at rest.

But, in order to access the data, you need a key — and that’s where the potential vulnerability lies.

Encryption-at-rest is like storing your data in a secret vault, while encryption-in-transit is like putting it in an armored vehicle with security guards for transport.

End-to-end Encryption

End-to-end encryption is the act of applying encryption to messages on one device so that only the device to which it is sent can decrypt it. The message travels all the way from the sender to the recipient in encrypted form.

In practice, it means that only the communicating users (who have the key) can read the messages.

End-to-end encryption has created an impregnable fortress for communication services (for example, messengers), going beyond the security "façade" of encryption-in-transit and encryption-at-rest solutions.

This is the most common approach when protecting oneself against data breaches nowadays, but it only works from "one end to the other," as the term implies. Even though this all sounds great, end-to-end encryption can only be used for a "communication system" like Whatsapp or Telegram.

While theoretically sound, end-to-end encryption lacks flexibility, so it can’t be used when the "two ends" that share data don't exist, such as for cloud storage.

This is the motivation behind the development of Zero-Knowledge Encryption, a method that solves the problem by hiding the encryption key, even from the storage provider, resulting in an authentication request without the need for password exchange.

Zero-Knowledge Encryption

To log in to an account, you usually have to type in the exact password. In today's hyperconnected world, it's normal practice to tell the server your secret key ahead of time and test whether it matches.

Instead, there is another, more secure way, to manage this delicate process and that’s called Zero-Knowledge Encryption.

Without diving too deep, Zero-Knowledge Proof relies on three main requirements:

  1. Completeness — an honest prover will be able to convince the verifier that he has the password by completing some process in the required way;
  2. Soundness — the verifier will almost certainly discover when the prover is lying;
  3. Zero-knowledge — if the prover has a password, the verifier receives no more information other than the fact that the statement is true.

Essentially, the system will check to see if you can demonstrate your knowledge several times by responding to various conditions. It’s like a brute force attack carried out backwards — you perform the same action many times in order to make sure that the prover isn’t lying.
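
The repeated challenge-and-response idea can be sketched with the classic Schnorr identification protocol, here with deliberately tiny numbers for illustration (real deployments use very large groups). The prover shows knowledge of a secret x with y = g^x mod p without ever revealing x:

```python
import random

# Toy interactive Schnorr proof. Tiny numbers, illustration only.
p = 467            # a small public prime modulus
g = 2              # public base
x = 153            # the prover's secret ("password")
y = pow(g, x, p)   # public value derived from the secret

def prove_once(rng):
    """One challenge-response round; returns True if the verifier accepts."""
    k = rng.randrange(1, p - 1)
    r = pow(g, k, p)                 # prover's commitment
    c = rng.randrange(1, p - 1)      # verifier's random challenge
    s = (k + c * x) % (p - 1)        # prover's response; reveals nothing about x alone
    # Verifier's check: g^s must equal r * y^c (mod p)
    return pow(g, s, p) == (r * pow(y, c, p)) % p

# Repeat the round many times: a prover who doesn't know x would
# almost certainly fail at least one round.
rng = random.Random(42)
print(all(prove_once(rng) for _ in range(20)))  # True
```

Each round, the verifier learns only that the check passed (the "completeness" and "zero-knowledge" properties from the list above); repetition is what makes cheating vanishingly unlikely ("soundness").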

Instead of concluding, let’s round up the pros and cons of Zero-Knowledge proof encryption when compared to the alternatives:

The con here is a clear example of the exceptional security provided by the Zero-Knowledge Encryption solution, which prevents even system administrators from recovering your password. This is why we, at Passwork, rely on this technology in our products. Ultimately, that’s why you can rely on us too.

Why Zero-Knowledge Encryption is the best

Dec 30, 2021 — 4 min read

Many times, we’ve mentioned self-signed certificates and their most common use cases in our blog. After all, the main difference between a regular certificate and a self-signed one is that in the latter case, you act as the CA (Certificate Authority). But there are a variety of services that provide CA services for free, with the most popular being ‘Let’s Encrypt’, which is going to be the subject of this article.

What’s that?

‘Let’s Encrypt’ is a free certificate authority developed by the Internet Security Research Group (ISRG).

It provides free TLS/SSL certificates to any suitable client via the ACME (Automatic Certificate Management Environment) protocol. You can use these certificates to encrypt communication between your web server and your users. ‘Let's Encrypt’ provides two types of certificates: single-domain SSL, which covers one domain, and wildcard SSL, which covers a domain and all of its subdomains. Both types of SSL certificates have a 90-day validity period. These domain-validated certificates do not require a dedicated IP address. They accomplish this by delivering the client a unique token and then retrieving a key generated from that token via an HTTP or DNS request.

There are dozens of clients available which can be easily integrated with a variety of standard administrative tools, services, and servers. They also come written in a range of different computer languages.

We'll use the win-acme client in this tutorial because it's a basic, open-source, and constantly updated command-line application. It not only produces certificates but also automatically installs and renews them. And yes, this tutorial is for Windows users.

How does it work?

‘Let's Encrypt’ verifies the ownership of your domain before issuing a certificate. On your server, the ‘Let's Encrypt’ client creates a temporary file (a token) with the required information. The ‘Let's Encrypt’ validation server then sends an HTTP request to fetch the file and validates the token, ensuring that your domain's DNS record resolves to the server running the ‘Let's Encrypt’ client.

In an HTTP-based challenge, for example, the client will generate a key from a unique token and an account token, then save the results in a file that the web server will serve. The file is then retrieved from the Let's Encrypt servers at:

If the key is correct, the client has demonstrated that it can control resources on the domain, and the server will sign and issue a certificate.
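
The token-and-key step described above corresponds to the "key authorization" in the ACME specification (RFC 8555): the challenge token is joined with a SHA-256 thumbprint of the account's public key. A Python sketch, using a made-up token and a truncated example key purely for illustration:

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as ACME requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def key_authorization(token: str, account_jwk: dict) -> str:
    """HTTP-01 challenge response: the token joined with the SHA-256
    thumbprint of the account's public key (RFC 8555 / RFC 7638)."""
    # Thumbprint: SHA-256 over the canonical JSON of the JWK members,
    # keys sorted lexicographically, no whitespace.
    canonical = json.dumps(account_jwk, sort_keys=True, separators=(",", ":"))
    thumbprint = b64url(hashlib.sha256(canonical.encode()).digest())
    return f"{token}.{thumbprint}"

# Made-up token and truncated example key values:
jwk = {"kty": "EC", "crv": "P-256", "x": "example-x", "y": "example-y"}
print(key_authorization("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA", jwk))
```

The validation server fetches this exact string from the challenge URL and recomputes it independently; a match proves control of both the domain and the account key.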

How do I set it up?

Before we start:

  • Make sure that you’ve downloaded the latest version of the application on the server from its Github release page;
  • Scroll down to ‘assets’ and download the zip package from the release page. If you're having difficulty with Internet Explorer, you may install Chrome on the server following this approach. Once the application has been downloaded, unpack it and save it somewhere safe for future use.

Now let’s generate the Let’s Encrypt certificates

Simply run wacs.exe to generate the Let's Encrypt certificates. Since we downloaded the application from the internet, you may receive a notification from Windows Defender claiming that "Windows protected your PC". If that happens, click the "More Info" link, then the "Run Anyway" option. Because it’s open-source and widely utilized, the application is completely safe to use.

Follow these simple steps once the application has started:

  • Choose N in the main menu to create a new certificate with default settings;
  • Choose how you want to determine the domain name(s) that you want to include in the certificate. These may be derived from the bindings of an IIS site, or you can input them manually;
  • A registration is created with the ACME server if no existing one can be found. You will be asked to agree to its terms of service and to provide an email address that the administrators can use to contact you;
  • The program negotiates with the ACME server to try and prove your ownership of the domain(s) that you want to create the certificate for. By default, the http validation mode is picked and handled by our self-hosting plugin. Getting validation right is often the most tricky part of getting an ACME certificate. If there are problems, please check out some of the common issues for an answer;
  • After the proof has been provided, the program gets the new certificate from the ACME server and updates or creates IIS bindings as required, according to the logic documented here;
  • The program remembers all choices that you made while creating the certificate and applies them for each subsequent renewal.

For advanced instructions, visit this page.

And that’s pretty much it. It will successfully generate an SSL certificate for you if your domain is pointing to your server. It will also include a scheduled task that will renew the certificate when it expires. The SSL certificate will be installed automatically by the application.

Are there other options?

‘Certbot’ is the most widely used ‘Let's Encrypt’ client. We didn’t give it much attention in this article because it's “designed for Linux” and also a little more advanced. It comes with easy-to-use automatic configuration features for Apache and Nginx. And yes, there is a Windows version as well.

There are many other clients to choose from – the ACME protocol is open and well-documented. On their website, ‘Let's Encrypt’ keeps track of all ACME clients.

Here’s a list of the best options (n.b. most are for Linux):

  • lego. Lego is a one-file binary installation written in Go that supports many DNS providers;
  • is a simple shell script that can run in non-privileged mode and interact with more than 30 different DNS providers;
  • Caddy. Caddy is a full web server written in Go with built-in support for Let’s Encrypt.

‘Let’s Encrypt’ is just great, there are no other ways to put it. It’s a free, automated, and open certificate authority, run for the public’s benefit. It can be accessed via a variety of tools and services. The best part is, they really keep their motto close to heart:

“We give people the digital certificates they need in order to enable HTTPS (SSL/TLS) for websites, for free, in the most user-friendly way we can. We do this because we want to create a more secure and privacy-respecting Web for all.”

An Overview of ‘Let's Encrypt’

Dec 20, 2021 — 4 min read

It is rare for technologies to be born from ambitious philosophical concepts or mind games. But, when it comes to security and cryptography – everything is a riddle.

One such riddle is: ‘How can you prove that you know a secret without giving it away?’. Or, in other words, ‘How can you tell someone you love them without saying that you love them?’.

The Zero-Knowledge Proof technique, as suggested by the name, uses cryptographic algorithms to allow several parties to verify the authenticity of a piece of information without having to share the material that makes it up. But how is it possible to prove something without supporting evidence? In this article, we’ll try our best to break it down for you as easily as possible.


Why on Earth would anyone need such a complicated concept? Well, millions of people use the internet every day, accepting cookies and sharing personal information in exchange for access to services and digital products. Users are becoming steadily more vulnerable to security breaches and unauthorized access to their data. Furthermore, individuals frequently have to give up their privacy in return for digital platform services such as suggestions, consultations and tailored support – none of which would be available when browsing privately. All of this creates a certain asymmetry of information: you hand over your data in exchange for a service.

In 1985, three great minds noticed ‘a great disturbance in the Force’ ahead of their time and released a paper called "The Knowledge Complexity of Interactive Proof-Systems" which introduced the concept of Zero-Knowledge Proof (ZKP) for the first time.

So what is it?

ZKP is a set of tools that allows an item of data to be evaluated without having to reveal the data that supports it. This is made feasible by a set of cryptographic methods that allow a ‘prover’ to mathematically demonstrate to a ‘verifier’ that a computational statement is valid without disclosing any data.

In this way, it is possible to establish that particular facts are correct without having to share them with a third party. For example, a user could demonstrate that he is of legal age to access a product or service without having to reveal his exact age. Or, it’s a bit like showing your friend your driving licence instead of proving to him that you can drive by road-tripping to Mexico.

This technique is often used in the digital world to authenticate systems without the risk of information being stolen. Indeed, it’s no longer necessary to provide any personal data in order to establish a person's identity.

Sounds great, but how does it work?

The prover and the verifier are the two most important roles in zero-knowledge proofs. The prover must demonstrate that they are aware of the secret whereas the verifier must be able to determine whether or not the prover is lying.

It works because the verifier asks the prover to perform actions that can only be carried out if the prover really knows the secret. If the prover is guessing, the verifier’s tests will catch him or her out. If the secret is known, the prover will pass the verifier’s test with flying colours every time. It’s similar to when a bank or other institution requests particular letters from a known secret word in order to authenticate your identity. You’re not telling the bank the whole secret word; you’re simply demonstrating that you know it.

Wonderful, but how does it REALLY work?

To answer this, let’s take a look at a piece of research by Kamil Kulesza.

Assume that two characters, Alice and Bob, find themselves at the mouth of a cave with two independent entrances leading to two different paths (A and B). A door inside the cave connects both paths, but it can only be unlocked with a secret code. This code belongs to Bob (the ‘prover’), and Alice (the ‘verifier’) wants to buy it – but first, she wants to make sure Bob isn’t lying.

How can Bob demonstrate to Alice that he has the code without divulging its contents? They perform the following to achieve this: Bob enters the cave via one of the two entrances (A or B), chosen at random, while Alice waits outside. Once Bob is inside, Alice approaches the mouth of the cave, summons Bob, and instructs him to come out via one of the two exits.

Bob will always be able to return via the path that Alice directs him to, even if it does not coincide with the one he chose in the first place, because he can unlock the door and depart through the other side with the secret code.

But wait a minute – there is still a 50% chance that Alice names the very entrance Bob walked in through, right? Indeed there is. However, if the exercise is repeated several times, the likelihood that Bob escapes along the path chosen by Alice every time without possessing the code shrinks until it is almost zero. The conclusion? If Bob exits by the named path a sufficient number of times, he has unmistakably shown Alice that his claim of holding the secret code is true. Moreover, there was no need to reveal the actual code at any point.
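The arithmetic behind that intuition is easy to check with plain shell and awk: each round, a cheating prover survives with probability 1/2, so after n rounds the odds of cheating successfully are (1/2)^n.

```shell
# Probability that a prover who does NOT know the code passes n rounds
# purely by guessing which exit Alice will name: (1/2)^n per experiment.
for n in 1 5 10 20; do
  awk -v n="$n" 'BEGIN { printf "rounds=%d cheat_probability=%.12f\n", n, 0.5 ^ n }'
done
```

Twenty rounds already push a cheater’s odds below one in a million, which is why a handful of repetitions is enough to convince Alice.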

You can find out more about the Bob and Alice metaphor here.

Got it, so how is it used?

As for right now, ZKP is developing hand in hand with blockchain technology.

Zcash is a crypto platform that uses a unique iteration of zero-knowledge proofs (called zk-SNARKs). It allows native transactions to stay entirely encrypted while still being confirmed under the network's consensus rules. It’s a great example of this technology being used in practice.

Even though zero-knowledge proofs have a lot of potential to change the way today's data systems verify information, the technology is still considered to be in its infancy — primarily because researchers are still figuring out how to best use this concept while identifying any potential flaws. This, however, doesn’t stop us from using this protocol in our products! ;)

For a deeper understanding of the technical aspects and history behind this protocol, we recommend watching this video on YouTube.

What is Zero-knowledge Proof?

Dec 14, 2021 — 5 min read

The Secure Sockets Layer (SSL) and the Transport Layer Security (TLS) cryptographic protocols have seen their share of flaws, like every other technology. In this article, we would like to list the most commonly-known vulnerabilities of these protocols. Most of them affect the outdated versions of these protocols (TLS 1.2 and below), although one major vulnerability was found that affects TLS 1.3.


This cute name should not mislead you – it stands for Padding Oracle On Downgraded Legacy Encryption. Not that nice after all, right? The attack was published in October 2014, and it takes advantage of two peculiar facts. The first is that some servers and clients still support SSL 3.0 for interoperability and compatibility with legacy systems. The second is a vulnerability related to block padding that appears in SSL 3.0. Here is a link to that vulnerability in the NIST NVD database.

How does it work?

As mentioned before, this vulnerability stems from the Cipher Block Chaining (CBC) mode. Block ciphers operate on blocks of a fixed length, so if the data in the last block is shorter than the block size, it is filled with padding. The server ignores such padding by default – it only checks whether the length is correct and verifies the Message Authentication Code (MAC) of the plaintext. In practice, this means that the server cannot verify whether the content of the padding has been altered in any way.

An attacker is able to modify the padding data and then simply wait for the server’s response. Attackers can recover the contents byte by byte by watching server responses and altering the input. It usually takes no more than 256 attempts to reveal one byte of a cookie, which equates to a maximum of 4096 queries for a 16-byte cookie. Just by using automated scripts, the hacker may retrieve the plaintext character by character. Such a plaintext may be anything from a password to a cookie or a session token. So, without even knowing the encryption method or key, an attacker may easily decipher a block.


The Browser Exploit Against SSL/TLS attacks was disclosed in September 2011. It affects browsers that support TLS 1.0, because this early version of the protocol has a vulnerability when it comes to the implementation of the above-mentioned CBC mode. Here’s a link to it in the NIST NVD database.

How does it work?

This kind of attack is usually client-side, so in order for it to succeed, an attacker should, in one way or another, have some control over the client’s browser. After getting access to a browser, the attacker would typically use a man-in-the-middle (MITM) position to inject packets into the TLS stream. After such an injection, the only thing left is to guess the Initialization Vector (IV) – a random block of data that makes each message unique. The guessed IV is combined with the injected message, and the result is compared with the ciphertext of the targeted block.


The Compression Ratio Info-leak Made Easy (CRIME) vulnerability affects TLS compression. The client may optionally request the DEFLATE compression method in its Client Hello message; compression was introduced to SSL/TLS to reduce bandwidth. The main technique used by any compression algorithm is to replace repeated byte patterns with a pointer to the first instance of the sequence. As a result, the longer the repeated sequences, the more effective the compression. You can visit this link to locate this vulnerability in the NIST NVD database.

How does it work?

Let’s imagine that a hacker is targeting cookies. He knows that the targeted website sets a session cookie named ‘adn’. The DEFLATE compression method replaces repeated bytes, and our antagonist knows that. Therefore, he injects Cookie:adn=0 into the victim’s request. Because Cookie:adn= has already been sent, the server’s compressed response encodes the repeated sequence as a short back-reference.

After that, the only thing left for the attacker to do is inject different characters and monitor the size of the response.

If the response is shorter than the first one, the injected character is contained in the cookie value – the repetition compressed well. If the character is not in the cookie value, the response will be longer. Using this strategy, an attacker can reconstruct the cookie value from the server’s feedback.
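You can watch this size side-channel with nothing more than gzip, which uses DEFLATE internally. In this hedged sketch the cookie value is invented; the point is only that a guess matching the secret compresses no worse than a wrong one:

```shell
# DEFLATE replaces repeated byte patterns with back-references, so a guess
# that matches the secret compresses better (or at least no worse) than a
# wrong one. The cookie value below is made up for illustration.
secret="Cookie:adn=0; sessionid=N7fq2Lrx"
match="Cookie:adn=0; sessionid=N"   # first character of the value guessed right
miss="Cookie:adn=0; sessionid=X"    # guessed wrong
match_len=$(printf '%s%s' "$secret" "$match" | gzip -c | wc -c)
miss_len=$(printf '%s%s' "$secret" "$miss" | gzip -c | wc -c)
echo "matching guess compresses to $match_len bytes"
echo "wrong guess compresses to $miss_len bytes"
```

In the real attack, the adversary repeats this one character at a time, keeping whichever guess yields the shorter response.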

By the way, the BREACH attack is very similar to CRIME, but it targets HTTP compression instead.


Heartbleed was a major vulnerability discovered in the OpenSSL (1.0.1) library's heartbeat extension. This extension is used to maintain a connection so long as both participants are online. Here is its link on our beloved NIST NVD database.

How does it work?

The client sends the server a heartbeat message with a payload containing data, its length, and padding. The server responds with the same heartbeat message, echoing the payload back to the client.

The vulnerability stems from the fact that if the client sends a false payload length, the server responds with a heartbeat message containing not only the data received from the client but – because it needs to fill the stated length – random data from its memory as well. A very unfortunate technical solution, isn’t it?

Leaking such unencrypted data may result in a disaster. It is well-documented that by using such a technique, an intruder may obtain a private key to the server. Moreover, if the attacker has the private key – well, that means he’s able to decrypt all the traffic to the server. I guess we don’t need to tell you why that’s no good, right?

The entire list of vulnerabilities may be found in this wonderful report by the Health Sector Cybersecurity Coordination Center. It not only lists all the above-mentioned techniques in a very informative manner but also provides a very neat table of all ‘Known Threats to TLS/SSL’ (pp. 24-26).

Instead of a long conclusion, let’s just look at the last page of that report:

The green cell of the chart explicitly states how to get rid of most known vulnerabilities. Indeed, if you’re not interested in trawling through the nooks and crannies of the Library of Alexandria’s floor on Cybersecurity, the best way to track existing SSL/TLS vulnerabilities is to check the NIST NVD database once in a while. You might even consider making it your homepage.

What are SSL Vulnerabilities?

Dec 2, 2021 — 4 min read

Security, security, security… There is no way one can underestimate its importance when it comes to caring for private files and sensitive data. As long as the world of cybersecurity is defined by the constant conflict between hackers and programmers, fully protecting yourself and your business will remain impossible. But, as we know, hackers aren’t always using state-of-the-art techniques. Often, they’re still getting in by guessing your username and password.

Most popular kinds of technology are under a constant barrage of hacking attempts, which is why it is so important to follow simple protocols to save yourself both time and money. One such technology is SSL/TLS. It is used on almost every web service, and even though it may seem straightforward to set up, there are many arcane configurations and design choices that need to be made to get it ‘just right’.

This guide will provide you with a short ‘checklist’ to keep in mind when setting up or maintaining SSL/TLS with a focus on security. All information is accurate and up to date as of December 2021 and is based on both our experience and other guides made on this topic.

Track all your certificates

First and foremost, you should check up on all existing certificates that are used by you and your organisation. This covers all information about them, such as their owners, locations, expiration dates, domains, cipher suites, and TLS versions.

If you don’t know about, or don’t track, your existing certificates – along with weak keys and cipher suites – you expose yourself to security breaches connected to expiring certificates.

An easy way to list all your certificates is to get them from your CA. This may not work if you use self-signed certificates, which require additional attention in terms of tracking and listing. The second way, which is typically quite effective, is to find certificates using network scanners. Don’t be surprised if the scan turns up a sizeable number of certificates you were unaware of. Your certificate ‘inventory’ should also record details such as the OS and applications (e.g., Apache), because your organization could be vulnerable to exploits that target specific versions of components like OpenSSL (i.e., Heartbleed).
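As a sketch of what one inventory entry might capture, here’s how OpenSSL itself can pull those fields out of a certificate (the throwaway self-signed certificate and file names are purely illustrative):

```shell
# Create a throwaway certificate, then extract the fields worth recording
# in an inventory: subject, expiry date, key size, signature algorithm.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 30 -subj "/CN=inventory-demo" 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -subject -enddate
openssl x509 -in /tmp/demo.crt -noout -text \
  | grep -E 'Public-Key|Signature Algorithm' | sort -u
```

Pointing the same `x509` inspection at certificates collected by your scanner gives you the per-certificate rows of the inventory table below.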

So, your list of certificates should include:

  1. Certificates issued
    • Certificate types
    • Key sizes
    • Algorithms
    • Expiration dates
  2. Certificate locations
  3. Certificate owners
  4. Web server configurations
    • O/S versions
    • Application versions
    • TLS versions
    • Cipher suites

Don’t use weak keys, cipher suites or hashes

Every certificate has a public key and a signature, both of which may be vulnerable if they were created with outdated technology. On public web servers, certificates with key lengths of less than 2048 bits, or those that employ older hashing algorithms such as MD5 or SHA-1, are no longer allowed. They may, however, still be found on your internal services. If that’s the case, you’ll need to upgrade them.

Even more crucial than finding certificates with weak keys or hashes is checking which TLS/SSL versions and cipher suites your web servers support.

The following versions are outdated and must never be used:
• SSL v2
• SSL v3
• TLS 1.0
• TLS 1.1
Instead, enable TLS 1.2 and TLS 1.3.

The following cipher suites are vulnerable, and must be disabled:
• 3DES
• RC4
Instead, use modern ciphers like AES.
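In nginx, for example, the two recommendations above might translate into something like the following sketch (the exact cipher list should follow your own requirements):

```nginx
# Allow only modern protocol versions
ssl_protocols TLSv1.2 TLSv1.3;
# Prefer the server's cipher order and offer only AES-GCM ECDHE suites
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
```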

Install and renew all certificates on time

We recommend renewing a certificate at least 15 days before it expires to allow time for testing and reverting to the prior certificate if any problems arise.

Users should be notified when certificates expire, regardless of the mechanism you employ. Before expiration, the system should alert users automatically and at regular intervals (e.g., 90 days). Self-signed certificates should never have an expiration date of more than 30 days.
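OpenSSL can serve as the simplest possible expiry alarm via its `-checkend` flag, which asks whether a certificate expires within a given number of seconds. A hedged sketch (the demo certificate and paths are illustrative):

```shell
# Create a demo certificate valid for 30 days, then ask whether it
# expires within the next 15 days (-checkend takes seconds).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/renew.key \
  -out /tmp/renew.crt -days 30 -subj "/CN=renewal-demo" 2>/dev/null
if openssl x509 -in /tmp/renew.crt -noout -checkend $((15*24*3600)); then
  echo "more than 15 days left"
else
  echo "renew now"
fi
```

Run against your real certificate files from cron, this one check covers the ‘renew 15 days early’ rule above.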

Never reuse key pairs for new certificates

By the same token, never reuse CSRs (Certificate Signing Requests) — this will automatically reuse the private key. Reusing keys leads to key pair vulnerability. Don’t be lazy when it comes to your security.

Review the CA that you use

Your certificates are only as trustworthy as the CA that issues them. All publicly trusted CAs are subject to rigorous third-party audits to maintain their position in major operating system and browser root certificate programs, but some are better at maintaining that status than others.

Use Forward Secrecy (FS)

Also known as perfect forward secrecy (PFS), FS assures that a compromised private key will not also compromise past session keys. To enable it, use at least TLS 1.2 and configure it to use the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) key exchange algorithm. The best practice here is to use TLS 1.3, as it provides forward secrecy for all TLS sessions using the ephemeral Diffie-Hellman key exchange ‘out of the box’.
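To see which forward-secrecy suites your own OpenSSL build offers, you can ask it directly – every ECDHE suite in the list uses an ephemeral key exchange:

```shell
# List the cipher suites matching the ECDHE selector; these all provide
# forward secrecy (OpenSSL 1.1.1+ also prepends the TLS 1.3 suites).
openssl ciphers -v 'ECDHE'
```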


DNS CAA is a standard that allows domain name owners to restrict which CAs can issue certificates for their domains. In September 2017, the CA/Browser Forum mandated CAA support as part of its certificate issuance standard baseline requirements. The attack surface certainly shrinks with CAA in place, effectively making sites more secure. If CAs have an automated mechanism in place for certificate issuance, they should check for DNS CAA records to prevent certificates from being issued incorrectly. It is advised that you add a CAA record to your DNS zone, whitelisting the certificate authorities (CAs) that you trust.
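In BIND-style zone-file syntax, a CAA record set might look like the following sketch (example.com, the CA name and the contact address are all illustrative – list the CAs you actually trust):

```
example.com.   IN   CAA   0 issue "letsencrypt.org"
example.com.   IN   CAA   0 iodef "mailto:security@example.com"
```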

Keep in mind that encryption is not optional.

Enforce encryption across your entire infrastructure — no 'bald spots' should remain. Leaving elements of your infrastructure unprotected is like forgetting to lock the door when you leave — it's not a good idea.

The practices that we’ve mentioned in this list are more about common sense, rather than knowledge acquired through the ultra-secret bureau. When it comes to security, you should always keep your infrastructure up-to-date. New vulnerabilities pop up every day, but still, we, humans, pose the biggest threat of all. In other words, human error has more to answer for than we might first believe. Because of this, double-check everything you do. Security protocols should be easy to follow not only by the person who creates them but also by everyone who interacts with your infrastructure — users and employees alike.

With such a checklist, it’s impossible to shine a light on each and every nook and cranny, so for those of you who want to ensure that you’re following all best practices, consider checking out this list.

SSL best practices to improve your security

Nov 25, 2021 — 7 min read

Most web servers across the internet and intranets alike use SSL certificates to secure connections. These certificates are traditionally generated by OpenSSL – a software library containing an open-source implementation of the SSL and TLS protocols. Basically, we’re looking at a core library, providing us with a variety of cryptographic and utility functions. Because of its ease-of-use and, most importantly, because it’s open-source (so, free), it managed to make its way to the top, and now, it’s the industry standard.

OpenSSL is available for Windows, Linux and MacOS. So, before you get started, make sure that you have OpenSSL installed on your machine. Here’s a list of precompiled binaries for your convenience. But, to be honest, the OS doesn’t really matter here too much – the commands are going to be identical in our case.

In this tutorial, we’ll show you how easy it can be to generate self-signed certificates with OpenSSL.

Such a self-signed certificate is great if you want to use HTTPS (HTTP over TLS) to secure your Apache HTTP or Nginx web server, and you know that your certificate doesn’t need to be signed by a CA.

How should I use OpenSSL?

OpenSSL is all about its command lines. Below, we’ve put together a few common OpenSSL commands that regular users can fiddle about with to generate private keys. After each command, we’ll try to explain what that exact line of code does by breaking it down into its constituent parts. If you fancy studying all of the commands, take a look at this page.

How can I generate self-signed certificates?

Let’s start! First and foremost, we want to check whether we have OpenSSL installed. To do that, we need to run:

openssl version -a

If you get something like this, you’re on the correct path:

OpenSSL 3.0.0 7 sep 2021 (Library: OpenSSL 3.0.0 7 sep 2021)
built on: Tue Sep  7 11:46:32 2021 UTC
platform: darwin64-x86_64-cc
options:  bn(64,64)
OPENSSLDIR: "/usr/local/etc/openssl@3"
ENGINESDIR: "/usr/local/Cellar/openssl@3/3.0.0_1/lib/engines-3"
MODULESDIR: "/usr/local/Cellar/openssl@3/3.0.0_1/lib/ossl-modules"
Seeding source: os-specific
CPUINFO: OPENSSL_ia32cap=0x7ffaf3bfffebffff:0x40000000029c67af

This output lists the OpenSSL version that you have installed, along with some build details.

Now, the first important thing on our agenda is to generate a Public/Private keypair. To do this, we ought to punch in the following command:

openssl genrsa -out passwork.key 2048

genrsa – is the command to generate a keypair with the RSA algorithm;

-out passwork.key – this is the output key file name;

2048 – the key size. Make sure to double-check your requirements at this stage, because this value may change depending on your use case.

Now we have the file ‘passwork.key’ in the directory that we specified. Once we’ve created the file, we can, for example, extract the public key by running:

openssl rsa -in passwork.key -pubout -out passwork_public.key

rsa – we ought to specify the algorithm that we used;

-in passwork.key – we take the existing keypair;

-pubout – here, we take only the public key;

-out passwork_public.key – we export it to a file named passwork_public.key.

Now we can proceed to creating a CSR – a Certificate Signing Request. In a real production scenario, the CSR is forwarded to the CA, which signs it on your behalf, so you get a certificate. But for the sake of our tutorial, we’ll create a CSR and self-sign it.

The command to create a CSR is as follows:

openssl req -new -key passwork.key -out passwork.csr

req -new – here, we’re specifying that we want to create a new request;

-key passwork.key – here, we’re specifying the key that we will use;

-out passwork.csr – here, we’re specifying the output file.

After pressing ‘Enter’ you’ll see something like this:

You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields, there will be a default value,
If you enter '.', the field will be left blank.

Enter the required data. For the purposes of our tutorial, we’ve entered the following fake values:

Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:FL
Locality Name (eg, city) []:Tallahassee
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Passwork
Organizational Unit Name (eg, section) []:.
Common Name (e.g. server FQDN or YOUR name) []:*
Email Address []
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

One of the most important fields is Common Name, which should match the server name or FQDN where the certificate is going to be used.

After all the above-mentioned steps, we’ll have our CSR file fully generated. In the real world, at this stage, we’d want to verify our CSR file, just so we’re not passing the wrong file to the CA. Also, it’s good practice to double-check whether the FQDN is correct.

Having all that in mind, we run:

openssl req -text -in passwork.csr -noout -verify

The output, in our case, would be:

Certificate request self-signature verify OK
Certificate Request:
      Version: 1 (0x0)
      Subject: C = US, ST = FL, L = Tallahassee, O = Passwork, CN = *, emailAddress =
      Subject Public Key Info:
         Public Key Algorithm: rsaEncryption
            Public-Key: (2048 bit)
            Exponent: 65537 (0x10001)
         Requested Extensions:
   Signature Algorithm: sha256WithRSAEncryption
   Signature Value:

As you can see, it lists crucial data that you provided when you answered questions related to the CSR. Check the values and if something is wrong – simply re-generate the CSR before you pass it on to the CA.

As we mentioned before, instead of passing our CSR to the CA, we’ll create a self-signed certificate. In order to do that, we ought to enter the following:

openssl x509 -in passwork.csr -out passwork.crt -req -signkey passwork.key -days 30

The x509 command is our multi-purpose certificate utility;

-in passwork.csr represents our CSR;

-out passwork.crt is the name and file extension for our certificate;

-req -signkey passwork.key – here, we’re specifying that the input is a CSR and the keypair that we want to use to sign our certificate;

-days 30 – this is an expiration time interval for our certificate.

The passwork.crt certificate file has now been generated, so it’s ready to go! Pretty easy, right? Well, it gets a lot easier when we remember that we can generate a self-signed certificate by just entering:

openssl req -x509 \
-newkey rsa:2048 -nodes -keyout passwork.key \
-days 365 -out passwork.crt

Here, the keypair and the certificate are created in one step – a temporary CSR is generated internally, so we never have to handle a separate CSR file.
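If you’d rather skip the interactive questions entirely, -subj lets you supply the Distinguished Name fields on the command line. A sketch, reusing the same fake values from earlier in the tutorial:

```shell
# One-step self-signed certificate with a non-interactive subject;
# -nodes leaves the key unencrypted, which is fine for a demo only.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/passwork.key \
  -out /tmp/passwork.crt -days 365 \
  -subj "/C=US/ST=FL/L=Tallahassee/O=Passwork/CN=*"
openssl x509 -in /tmp/passwork.crt -noout -subject
```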

Please bear in mind that in real-life situations you should follow best practice when creating private keys – they are easy to generate and, if handled carelessly, just as easy to compromise.


We’ve barely touched the functionalities within OpenSSL, but as you can see, it’s not as complicated as many people first think – that’s why it’s loved far and wide. If you’ve still got any unanswered and burning questions, feel free to check out the frequently-asked questions (FAQ) page on the OpenSSL project’s website. If that’s not enough and you’re really looking for a deep dive – we can’t recommend this free e-book highly enough.

What is OpenSSL used for?

Nov 22, 2021 — 4 min read

The SSL/TLS protocol’s job is to ensure security through authentication. It was designed to encrypt data transmitted over open networks and, as a result, protect against interception and spoofing attacks. TLS also authenticates communicating parties, which leaves us with a pretty trusting environment. It goes without saying that security through authentication is essential for a successful business in the 21st century.

If we closely observe the way in which SSL works, it quickly becomes clear that, to establish this ‘trusting environment’, SSL certificates need to be signed and validated by a trusted Certificate Authority (CA). Everyone trusts the CA and, by extension, is able to trust those holding its certificates. Traditionally, organizations have used CAs to sign their SSL/TLS certificates, but with an influx of digital products, a huge amount of software being developed and tested, and an all-time data breach record, many companies are switching to self-signed certificates.

What is a Self-Signed SSL Certificate?

A self-signed certificate is a digital certificate that hasn’t been signed by a publicly trusted CA. Instead, it is issued and signed by the entity that is responsible for the software. This, on the one hand, makes deployment pretty frictionless, but on the other, it comes with additional risk, especially when poorly implemented.

Although they can be risky, self-signed certificates are incredibly widespread. These certificates are available at no cost and can be requested easily by anyone, which is fantastic for internal testing environments or web servers that are otherwise locked for external users. Moreover, such a certificate still uses the same encryption methods as other, paid SSL/TLS certificates – very good news for organizations, because nobody wants their data leaking. And since no CA enforces an expiry date, a self-signed certificate may be issued once and used till the end of time. This comes in handy, for example, when working on secret projects or purely with internal data.

For many companies that use self-signed certificates, the biggest advantage is, of course, independence. All security infrastructure is encapsulated inside the internal network, so even if such a network isn’t connected to the web at all – it’ll still work as intended.

Although this looks very convenient on the surface, it is one of the major concerns when dealing with these types of certificates. Offline, they aren’t able to receive security updates in response to discovered vulnerabilities, nor can they deliver the certificate agility that is essential to securing today’s modern enterprise.

Another challenge that arises when dealing with self-signed certificates is that responsible departments often lack visibility over how many were issued, where they are used, by whom, and also how the private key is stored. It’s hard enough to keep track of certificates issued by different public and private CAs. It’s almost impossible to track all self-signed certificates without an additional request process.


Advantages

  • Fast and easy to issue;
  • Useful in test environments;
  • Flexibility;
  • Independence;
  • No expiration date.


Disadvantages

  • No security updates;
  • Can’t be easily revoked;
  • Lack of visibility and control.

Let’s imagine that our internal network has been breached. If we use self-signed certificates, there is literally no way of knowing whether those certificates, or the private keys associated with them, have been compromised. Once compromised, such a certificate may be used to spoof identities and gain access to important data – especially considering that, unlike CA-issued certificates, self-signed certificates cannot be revoked and, as we mentioned before, may have no expiration date. You cannot simply ‘revoke’ a private key in such a situation.

So, why are self-signed certificates still in use? The simple answer is that it’s convenient. The routine manual process of submitting a certificate signing request (CSR) and waiting hours for verification is just horrible. To save time and frustration, it makes more sense to opt for self-signed certificates.

So the biggest question on self-signed certificates of any type is not how to issue them, but how to properly implement them inside an organisation. It’s like making sushi – the recipe is very simple, but the devil is in the details.

Some risks may be indirect – let’s imagine we’re looking to use a self-signed certificate to provide access to an employee portal. It will cause any default browser to alert the user with warnings. As these alerts can be ignored, many organisations tend to instruct their employees to do exactly that – ignore warnings. The safety of the internal portal is assured, so there is no direct harm, but, at the same time, employees ‘learn’ to ignore alerts and warnings the same way we all ignore ads on websites. Such practices make the organisation overall more vulnerable. The crux of the matter is that employees just don’t provide essential feedback on time if something goes very wrong.

To get the best out of self-issued certificates and mitigate the risks involved, we recommend using OpenSSL to issue certificates. It is de facto an industry standard. But, as mentioned before, this is not enough. Correct implementation is even more important than the tools used. After all, a top of the line DeWalt grinder is going to be useless if you’re using it to hammer in a nail. So, when implementing a self-signed certificate try to follow these best practices:

  • Limit the expiration period, it should be as short as possible. Never use certificates that don’t feature an expiry date.
  • Limit usage. Never create ‘universal certificates’ that open all doors at once.
  • Use a meaningful and informative ‘subject’ record. Everybody should understand what the certificate is being used for.
  • Make sure that the algorithm used for the signature is at least SHA256WITHRSA (which is the default in OpenSSL).
  • Create only encrypted private keys.
  • Use elliptic curve keys as opposed to the default RSA ones; they provide a number of benefits over RSA.
  • Most importantly, create a repeatable/scriptable process for issuing certificates and keys. OpenSSL is a de-facto standard command-line tool that can be used as the basis for this process.

Self-signed certificates are fast and easy to use; they are great for test environments, or when providing encrypted access to internal data. They provide independence and are free to use. That is why they are used across the board in so many companies, and for many internal use cases there’s really no other sensible way to do things. But remember, SSL/TLS self-signed certificates are like fugu fish – delicious when cooked well, but you’ll drop dead if the chef’s on his lunch break.

What is a Self-Signed Certificate?

Nov 17, 2021 — 5 min read

Stuck between a proxy and a hard place

Let’s imagine that you’re managing a small team, all of whom are coming back to work after a relaxing furlough period. Of course, you’re going to notice a drop in productivity; your team has become accustomed to browsing YouTube between Zoom calls and messaging their friends on Facebook. The solution? A ‘forward proxy’, which is the kind of proxy you’re likely to be familiar with. This will make sure that employees are prompted to get the ‘Pass’ back to work, should they try to access social networking.

Now, perhaps you have an update scheduled for your website, but you’re still not sure whether you’ve caught all the bugs. Or, maybe you want to scale your infrastructure in a ‘plug'n'play’ way. How are you going to test out new features on a certain percentile of your users? Here, you’ll find faithful solace in the almighty powers of the ‘reverse proxy’.

Whether you’re looking to learn more about forward or reverse proxies, today we’ll take a deep dive and explore how you can level up your business’ IT infrastructure through their use.

So, what’s a proxy server?

A proxy server is effectively a gateway between networks or protocols. They usually separate the end-user from the server. They also can alter or redirect the connection or data that passes through. Moreover, proxies come in a variety of 'tastes and colours' depending on their use case, the system complexity, privacy requirements, and so on.

If you’re using a proxy server, any data you send to the external network flows through it first. It works both ways, too: someone on the external network cannot reach you, the client, without that data first passing through the proxy.

Importantly, there are two main types of proxies: Forward Proxies and Reverse Proxies. And even though the principle behind these two is similar, their use cases differ greatly.

Forward Proxies

In your day-to-day life, you’ll encounter forward proxies the most – these kinds of proxies sit between the client and the external network. They evaluate outbound requests and take action on them before relaying those requests to the external resources. Forward proxy servers allow the redirecting of traffic, meaning that if you have a proxy server installed within your local enterprise or on your home network, you’re able to effectively block selected websites. Maybe you don’t want your kids to watch Netflix, or Dave from accounting keeps stalking his ex on Facebook when he should be writing up a report – in both cases, installing a proxy is a great solution. VPNs serve a very similar function but encrypt their traffic flows; if you’re in the market for such a tool, we recommend ExpressVPN. We’d also recommend steering clear of free proxies, as there have been notable instances of traffic being logged and sold on the black market.

Now, when using proxies, servers outside your network can’t tell who the client is, so by the same token, individuals or companies using forward proxies may access material that would otherwise be banned in their country or office. That’s exactly how the ‘Great Firewall of China’ works, and it’s also how you’re able to stream a season on Netflix that would otherwise be unavailable in your country. This is why your office should restrict software downloads and access to in-browser forward proxies. Otherwise, Dave is just going to fight fire with fire.

So generally speaking, forward proxies are used to filter or unfilter Web content (depending on which side of the fence you sit).

Reverse proxies

Now that we’ve explained how a forward proxy works, disguising the client’s identity from the server, you can probably guess that the reverse proxy works vice-versa; the client doesn't know what exact server it is contacting. That may be used, for example, to help you rebalance the bandwidth between your service’s servers. That way, you can connect users to servers with the lowest load, or the smallest ping. Or, if your servers hold a lot of static data – such as JS scripts or HTML files on your website – it can be cached on a proxy server. Big social networks use reverse proxies to distribute the traffic among users' locations and corresponding data centers. The end-user, in most cases, remains oblivious to your internal process.

Still, most webmasters use the NGINX reverse proxy, as it allows you to easily pass any request to a proxied server, configure buffers, and choose the outgoing IP address.

The benefits of using a reverse proxy for your backend infrastructure are very straightforward:

  • Load balancing – you’re able to set up your proxy server to choose the least loaded server each time a client makes a call. This will, of course, make the end-user experience incredibly smooth.
  • Caching – as mentioned before, if your users perform identical calls to your server, you can store some data on the proxy server instead of loading your servers with heaps of requests (however, don't forget that caching on a proxy can sometimes be dangerous, so approach with caution).
  • Isolating internal traffic – one of the best features indeed! You can run all your internal server architecture within a completely isolated DMZ, and it remains hidden: your port preferences, containers, virtual servers and physical servers shan’t be exposed to the outer world. This, of course, adds another layer of security to your infrastructure.
  • Logging – you can log all internal network events on your reverse proxy. This means that if one of your servers returns an error – you can query and debug it on your proxy server. Moreover, it allows you to monitor the overall performance of your infrastructure easily from a single node.
  • Canary deployment – this means you can test new features on only a selected percentage of users, and you can also perform other A/B tests. All of this allows you to significantly reduce risks when deploying updates to your service – your API calls are the same, the ports are the same, but your content is able to change dramatically.
  • Scalability – if you need more servers, set them up and add them to the list of proxied servers. That's as simple as it gets.
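To make the load-balancing and header-forwarding ideas above concrete, here is a minimal NGINX reverse-proxy configuration sketch. All server names, addresses, and ports are invented for illustration; a production setup would also need caching, TLS, and logging directives.

```nginx
# Hypothetical pool of backend servers – NGINX picks the least-loaded one.
upstream backend {
    least_conn;
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;               # relay the request to the pool
        proxy_set_header Host $host;             # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr; # tell the backend who the client is
    }
}
```

Adding another server to the `upstream` block is all it takes to scale out – which is exactly the plug'n'play quality described above.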

There are plenty of scenarios and use cases in which having a reverse proxy can make all the difference when looking to improve the speed and security of your corporate network. By providing you with a point at which you can inspect traffic and route it to the appropriate server, or even one where you may transform the request entirely, a reverse proxy can be used to achieve a variety of different goals.

Using forward and reverse proxies allows you to significantly simplify your internal infrastructure. Not only are you bound to increase efficiency by keeping Dave off of Facebook, but you’re also adding another layer of security for both your employees and your servers. Logging will allow you to track your network usage and debug certain issues. In addition, caching definitely offers a smoother and more consistent experience to your end-users. So, throw your doubts to the wind and get involved. After all, most of the other successful services do it too.

What is a Proxy Server and How Does it Work?

Nov 5, 2021 — 5 min read

Cryptography is both beautiful and terrifying. Perhaps a bit like your ex-wife. Despite this, it represents a vital component of day-to-day internet security; without it, our secrets kept in the digital world would be exposed to everyone, even your employer. I doubt you’d want information regarding your sexual preferences to be displayed to the regional sales manager while at an interview with Goldman Sachs, right?

Computers are designed to do exactly what we ask them to do. But sometimes there are certain things that we don’t want them to do, like expose your data through some kind of backdoor. This is where cryptography comes into play. It transforms useful data into something that can’t be understood without the proper credentials.

Let’s take a look at an example. Most internet services need to store their users’ password data on their own servers. But they can’t store the exact values that people input on their devices because, in the event of a data breach, malevolent intruders would effectively gain access to a simple spreadsheet of all usernames and passwords.

This is where ‘Hash’ and ‘Salt’ help us a lot. Throughout this article, we’re going to explain these two important encryption concepts through simple functions in Node.JS.

What is a ‘hash’?

A ‘hash’ literally means something that has been chopped and mixed, and originally was used to describe a kind of food. Now, chopping and mixing are exactly what the hash function does! You start with some data, you pass it through a hash function where it gets whisked and chopped, and then you watch it get transformed into a fixed-length value (which at first sight seems pretty meaningless). The important nuance here is that, contrary to cooking, the same input always produces the same output. For the purposes of cryptography, such a hash function should be easily computable, and different inputs should, in practice, never produce the same value. It should also work in a similar way to mashing potatoes – mashing is a one-way process; the raw potato may not be restored once it has been mashed. Indeed, the result of a hash function should be impenetrable to computer-led reverse engineering efforts.

These properties come in handy when you’re looking to store user passwords on a database – you don’t want anyone to know their real values.

Let’s implement a hash in Node.JS!

First, let’s import the createHash function from the built-in ‘crypto’ module:

const { createHash } = require('crypto');

Next, let’s define a function named ‘hash’ (which takes a string as the input, and returns a hash as the output):

function hash(input) {
    return createHash();
}

We also need to specify the hashing algorithm that we want to use. In our case, it will be SHA256. SHA stands for Secure Hash Algorithm and it returns a 256-bit digest (output). It is important to architect your code so it is easy to switch between algorithms because at some point in time they won’t be secure anymore. Remember, cryptography is always evolving.

function hash(input) {
    return createHash('sha256');
}

Once we call our hashing function, we may call ‘update’ with the input value and return the output by calling ‘digest’. We should also specify the format of the output (e.g. hex). In our case, we’ll go with Base64.

function hash(input) {
    return createHash('sha256').update(input).digest('base64');
}

Now that we have our hash function, we can provide some input, and console log the result.

let youShallNotPassPass = 'admin1234';
const hashRes1 = hash(youShallNotPassPass);

Here’s our baby hash:

So, how can we use this long, convoluted string of numbers, letters, and symbols? Well, now it’s easy to compare two values while operating with only hashes.

let youShallNotPassPass = 'admin1234';
const hashRes1 = hash(youShallNotPassPass);
const hashRes2 = hash(youShallNotPassPass);
const isThereMatch = hashRes1 === hashRes2;
console.log(isThereMatch ? 'hashes match' : 'hashes do not match');

Since hash values are, in practice, unique representations of the data they are derived from, they can be useful for object identification. For example, they might be used to iterate through objects in an array or find a specific one in the database.

But we have a problem. Hash functions are very predictable. On top of that, people don’t use strong passwords that often, so a hacker may just compare the hashes in a database with a precomputed spreadsheet of the most common passwords. If the values match – the password is compromised.

Because of this, it’s insufficient to just run passwords through a hash function before storing them in a database.

And that’s where our second topic makes an entrance – Salt.

‘Salt’ is a bit like the mineral salt that you would add to a batch of mashed potatoes – the taste will definitely depend on the amount and type of salt used. This is exactly what salt in cryptography is – random data that is used as an additional input to a hash function. Its use makes it much harder to guess what exact data stands behind a certain hash.

So, let’s salt our hash function!

First, we ought to import ‘scryptSync’ and ‘randomBytes’ from the ‘crypto’ module:

const { scryptSync, randomBytes } = require('crypto');

Next, let’s implement signup and login functions that take ‘nickname’ and ‘password’ as their inputs:

function signup(nickname, password) { }
function login(nickname, password) { }

When the user signs up, we will generate a salt, which is a random Base64 string:

const salt = randomBytes(16).toString('base64');

And now, we hash the password with a 'pinch' of salt and a key length, which is usually 64:

const hashedPassword = scryptSync(password, salt, 64).toString('base64');

We use ‘Scrypt’ because it’s designed to be expensive computationally and memory-wise in order to make brute-force attacks unrewarding. It’s also used as proof of work in cryptocurrency mining.

Now that we have hashed the password, we need to store the accompanying salt in our database. We can do this by appending it to the hashed password with a semicolon as a separator:

const user = { nickname, password: salt + ':' + hashedPassword };

Here’s our final signup function:

function signup(nickname, password) {
    const salt = randomBytes(16).toString('base64');
    const hashedPassword = scryptSync(password, salt, 64).toString('base64');
    const user = { nickname, password: salt + ':' + hashedPassword };
    return user;
}

Now let’s create our login function. When the user wants to log in, we can grab the salt from our database to recreate the original hash:

const user = users.find(v => v.nickname === nickname);
const [salt, key] = user.password.split(':');
const hash = scryptSync(password, salt, 64).toString('base64');

After that, we simply check whether the result matches the hash in our database. If it does, the login is successful:

const match = hash === key;
return match;

Here is the complete login function:

function login(nickname, password) {
    const user = users.find(v => v.nickname === nickname);
    const [salt, key] = user.password.split(':');
    const hash = scryptSync(password, salt, 64).toString('base64');
    const match = hash === key;
    return match;
}

Let’s do some testing:

//We register the user:
const user = signup('Amy', '1234');

//We try to login with the wrong pass:
let isSuccess = login('Amy', '12345');
console.log(isSuccess ? 'Login success' : 'Wrong password!')

//Wrong password!
//We try to login with the correct pass:
isSuccess = login('Amy', '1234')
console.log(isSuccess ? 'Login success' : 'Wrong password!')

//Login success

Our example, hopefully, has provided you with a very simplified explanation of the signup and login process. It’s important to note that our code is not protected against timing attacks and it doesn’t use PKI infrastructure to check hashes, so there are plenty of vulnerabilities for hackers to exploit.

Cryptography itself can be described as the constant war between hackers and cryptographic engineers. Or, that familiar legal battle with your ex-wife over her maintenance payments. After all, what works today may not work tomorrow. The proven vulnerability of the MD5 hash algorithm is a very good example.

So if your task is to ensure your users’ data privacy, be ready to constantly update your functions to counteract the recent ‘breakthroughs’.

What is password hashing and salting?

Nov 3, 2021 — 6 min read

Let's imagine that you decided to google ‘best sauces for Wagyu steak’. You went through several web pages, and then on page two of the search results, you get this notification from your Chrome browser:

Something went wrong, that's for sure. What happened? Should you proceed to the page without a private connection?

An IT expert would surely reply:

The error that you got here was probably because of an SSL/TLS handshake failure.

SSL? TLS?? Acronyms you’ve no doubt heard before, but ones that nevertheless evoke a dreary sense of confusion in the untrained mind. In this article, we’ll try to explain what SSL/TLS is, how it works and at the very least, you’ll understand what that lock icon on the address bar is.

Where did TLS originate?

TLS stands for Transport Layer Security, and it is right now the most common application of Web PKI. It’s used not only to encrypt internet browsing but also for end-to-end connections (video calling, messaging, gaming, etc.).

Nowadays, we expect almost any kind of connection on the internet to be encrypted, and if something is not encrypted, we get an alert similar to that seen in figure A. But that wasn’t always the case. If you go back to the mid-90s, very little on the internet was encrypted. Maybe that was because fewer people were using the internet back then, or maybe it was because there weren’t credit-card details flying all over the place.

The history of TLS starts with Netscape. In 1994, it developed Secure Socket Layer 1 – the grandfather of modern TLS. Technically, it fits between TCP and HTTP as a security layer. While version 1 was used only internally and was full of bugs, very quickly, they fixed all the issues and released SSL 2. Then, Netscape patented it in 1995 with a view to stopping other people patenting it so they could release it for free. This was a very odd yet generous move, considering what the real-life patent practice was at that time.

In 1995, the world was introduced to Internet Explorer, a browser that used a rival technology called PCT (Private Communications Technology), which was very similar to SSL. But as with any rivalry – there could only be one winner. In November 1996, SSL 3 was released, which, of course, was an improvement on SSL 2. Right after that, the Internet Engineering Task Force created the Transport Layer Security Working Group to decide what the new standard for internet encryption would be. It was subsequently renamed from SSL to TLS (as far as we know, this was because Microsoft didn't want Netscape to have dibs on the name). It actually took three years for the group to release TLS 1. It was so similar to SSL 3 that people began to name it SSL 3.1. But over time, through updates, the security level rose massively; bugs were terminated, ciphers were improved, protocols were updated etc.

But, how does it all actually work?

TLS is a PKI protocol that exists between two parties. They effectively have to agree on certain things to identify each other as trustworthy. This process of identification is called a 'handshake'.

Let’s take a look at a TLS 1.2 handshake, as an example.

First, let's load any webpage, then, depending on your browser, press the lock icon near the web address text field. You’ll be shown certificate info and somewhere between the lines you'll find a string like this:

This is called a Cipher Suite. It’s a string-like representation of our 'handshake' recipe.

So, let’s go through some of the things shown here:

  • First, we have ECDHE (Elliptic-curve Diffie–Hellman), which is a key agreement protocol that allows two parties, each having an elliptic-curve public–private key pair, to establish a shared secret over an insecure channel. In layman’s terms, this is known as key exchange;
  • The RSA is our Public Key authentication mechanism (remember, we need a Public Key for any PKI);
  • AES256 refers to the cipher that we’re going to use (AES) and its key size (256);
  • Lastly, SHA384 is effectively a building block that is used to perform hash functions.

Now, the trick is to exchange all that data in just several messages via our 'handshake'.

What exactly happens when we go to a new web page?

After we establish a TCP (Transmission Control Protocol) connection, we start our handshake. As always on the web, the user (Client) is requesting data from the Server – so he sends a 'Client Hello' message, which contains a bunch of data including:

  • The max TLS version that this Client can support, so that both parties are able to ‘speak the same language’;
  • A random number to protect from replay attacks;
  • A list of the cipher suites that the Client supports.

Assuming the Server is live, it responds with 'Server Hello', containing the Cipher Suite and TLS version it chose to connect with the Client + a random number. If the server can't choose a Suite or TLS version due to version incompatibility – it sends back a TLS Alert with a handshake failure. At this point, both the User and the Server know the communication protocol.

Keep in mind that the server is sending a Public key and a Certificate containing an RSA key. It’s important to know that the Certificate has an expiration date. You’ll understand why by the end of the article.

On top of that, the Server is sending a Server Key Exchange Message containing parameters for ECDHE with a public value. Very importantly, this Exchange Message also contains a digital signature (all previous messages are summarized using a hash function and signed using the private key of the Server). This signature is crucial because it provides proof that the Server is who they say they are.

When the Server is done transmitting all the above-mentioned messages, it sends a ‘Server Hello Done’ message. In layman’s terms, that’s an ‘I’m done for the day, I’ll see you at the pub’ kind of message.

The Client, on the other hand, will look at the Certificate and verify it. After that, it will verify the signature using the Certificate (you can't have one without the other). If all goes well, the Client is assured of the Server’s authenticity and sends a Client Key Exchange Message. This message doesn't contain a Certificate but does contain a Pre-master Secret. It is then combined with the random numbers that were generated during the ‘Hello’ messages to produce a Master Secret. The Master Secret is going to be used for encryption at the next step.

It may seem very complicated now, but we’re almost done!

The next stage involves the Client sending the ‘Change Cipher Spec’ message, which basically says "I’ve got everything, so I can begin encryption – the next message I'll send you is going to be encrypted with parameters and keys".

After that, the Client proceeds to send the ‘Finished’ message containing a summary of all the messages so far encrypted. This helps to ensure that nobody fiddled with the messages; if the Server can't decrypt the message, it leaves the 'conversation'.

The Server will reply in the same way – with a Change Cipher Spec and a Finished message.

The handshake is now done, and the parties can exchange HTTP requests/responses and load data. By the way, the only difference between HTTP and HTTPS is that the latter is secure – that’s what the ‘S’ stands for.

As you can see, it's incredibly difficult to crack this system open. However, that's exactly what we need to ensure security. Moreover, those two round trips that the data travels take no time at all, which is great; nobody wants their GitHub to take a month and a half to load up. By the way, the more advanced TLS 1.3 does all that in just one round trip!

Your connection is not private

When something goes wrong with TLS, you’ll see the warning that we demonstrated at the very beginning of this article. Usually, these are issues associated with the Certificate and its expiration date. That’s why your internet will refuse to work if you’ve messed around with the time and date settings on your device. But if the date and time are in order, never proceed to a website that triggers this warning, because most likely, somebody between you and the server is intercepting your private data.

What is Transport Layer Security (TLS) & how does it work?

Nov 2, 2021 — 4 min read

Let’s imagine that somehow you’re in the driver’s seat of a start-up, and a successful one too. You’ve successfully passed several investment rounds and you’re well on your way to success. Now, big resources lead to big data and with big data, there’s a lot of responsibility. Managing data in such a company is a struggle, especially considering that data is usually structured in an access hierarchy – Excel tables and Google Docs just don’t cut it anymore. Instead, the company yearns for a protocol well equipped to manage data. The company yearns for LDAP.

What is LDAP?

The story of LDAP starts at the University of Michigan in the early 1990s when a graduate student, Tim Howes, was tasked with creating a campus-wide directory using the X.500 computer networking standard. Unfortunately, accessing X.500 records was impossible without a dedicated server. Additionally, there was no such thing as a ‘client app’. As a result, Howes co-created DIXIE, a directory client for X.500. This work set the foundations for LDAP, a standards-based version of DIXIE for both clients and servers – an acronym for the Lightweight Directory Access Protocol.

It was designed to maintain a data hierarchy for small bits of information. Unlike ‘Finder’ on your Mac, or ‘Windows Explorer’ on your PC, the ‘files’ inside the directory tree, although small, are kept in a strictly hierarchical order – exactly what you need when organizing, for example, your HR structure, or when accessing a file. Unlike good old Excel, it is not a program but a protocol – essentially, a set of tools that allows users to find the information they need very quickly.

Importantly, this protocol answers three key questions regarding data management:

Who? Users must authenticate themselves in order to access directories.
How? A special query language is used to search for and manipulate data.
Where? Data is stored and organized in a proper, hierarchical manner.

Let’s now go through these key questions in greater detail.


Who?

It’s bad taste to provide internal data to any old Joe. That’s why LDAP users cannot access information without first proving their identity.

LDAP authentication involves verifying provided usernames and passwords by connecting with a directory service that uses the LDAP protocol. All this data is stored in what is referred to as a core user. This is a lot like logging into Facebook, where you’re only able to access a user’s feed and photos if they’ve accepted your friend request, or if their profile has been set to public.

Some companies that require advanced security use a Simple Authentication and Security Layer (SASL), for example, Kerberos, for the authentication process.

In addition, to ensure the maximum safety of LDAP messages whenever data is accessed from devices outside the company’s walls, Transport Layer Security (TLS) may be used.


How?

The main task of a data management system is to provide “many things to many users”.

Rather than creating a complex system for each type of information service, LDAP provides a handful of common APIs (LDAP commands) to do this. Supporting applications, of course, have to be written to use these APIs properly. Still, LDAP provides the basic service of locating information and can thus be used to store information for other system services, such as DNS, DHCP, etc.

Basic LDAP commands

Let’s look at the ‘Search’ LDAP command as an example. If you’d like to know which group a particular user is a part of, you might need to input something like this:
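As a purely hypothetical illustration (the user ID and attribute names are invented, and real directories vary), a search filter for that question might look like:

```
(&(objectClass=person)(uid=jbloggs))
```

combined with a request for the `memberOf` attribute, which, on directories that expose it, lists the groups the matched entry belongs to.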


Isn’t it beautiful? Not quite as simple as performing a Google search, that’s for sure. So, your employees will perform all their directory services tasks through a point-and-click management interface like Varonis DatAdvantage.

All those interfaces may vary depending on their configuration, which is why new employees should be trained to use them, even if they’ve used LDAP before.


Where?

As we mentioned before, LDAP has the structure of a tree of information. Starting with the roots, it contains hierarchical nodes relating to a variety of data, by which a query may then be answered.

The root node of the tree doesn't really exist and can't be accessed directly. There is a special entry called the root DSA-specific entry, or rootDSE, that contains a description of the whole tree, its layout, and its contents. But this really isn't the root of the tree itself. Each entry contains a set of properties, or attributes, in which data values are stored.

The tree itself is called the directory information tree (DIT). Branches of this tree contain all the data on the LDAP server. Every branch leads to a leaf in the end – a data entry, or directory service entry (DSE). These entries contain actual records that describe objects such as users, computers, settings, etc.

For example, such a tree for your company could start with the description of a position held, starting with you at the top as the director, finishing at the bottom with Joe Bloggs, the intern.

Each position would be tied to a person with a set of attributes, complete with links to subordinates. The attributes for a person may include their name, surname, phone number, email, in addition to their responsibilities. Each attribute would have a value inside, like ‘Joe’ for name and ‘Bloggs’ for surname.

The actual data contents may vary, as they totally depend on use. For example, you could have data issuing rights to certain people regarding the coffee machine. So, no Frappuccino for our intern Joe.

Sure, you can add more sophisticated data regarding each individual – their personal family trees, or even voice samples for instance, but typically, the LDAP would just point to the place where such data can be found.

Is it worth it?

LDAP is able to aggregate information from different sources, making it easier for an enterprise to manage information. But as with any type of data organization, the biggest difficulty is creating a proper design for your tree. There is always trial and error involved while building a directory for a specific corporate structure. Sometimes this process is so difficult that it even results in the reorganization of the company itself in favour of the hierarchical model. Despite this, for almost thirty years, LDAP has held its title as the most efficient solution for the organization of corporate data.

What is LDAP and how does LDAP authentication work?

Oct 26, 2021 — 5 min read

Imagine you’re a system administrator at Home Depot. Just as you’re about to head home, you notice that your network has just authorized the connection of a new air-conditioner. Nothing too peculiar, right? The next morning, you wake up to find that terabytes of data including logins, passwords and customer credit card information have been transferred to hackers. Well, that’s exactly what happened in 2014, when a group of hackers, under the guise of an unassuming HVAC system, landed an attack that cost Home Depot over $17.5 million, all over an incorrectly configured PKI. In this article, we’ll be conducting a crash course in PKI management.

So, what’s a PKI?

‘Public key infrastructure’ is a term that relates to a set of measures and policies that allow one to deploy and manage one of the most common forms of online encryption – public-key encryption. Apart from being a key-keeper for your browser, the PKI also secures a variety of different infrastructures, including internal communication within organizations, Internet of Things (IoT), peer to peer connection, and so on. There are two main types of PKIs:

•   The Web PKI, also known as the “Internet PKI”, has been defined by RFC 5280 and refined by the CA/Browser Forum. It works by default with browsers and pretty much everything else that uses TLS (you probably use it every day).

•   An internal PKI is the one you run for your own needs. We’re talking about encrypted local networks, data containers, enterprise IT applications or corporate endpoints like laptops and phones. Generally speaking, it can be used for anything that you want to identify.

At its core, PKI has a public cryptographic key that is used not to encrypt your data, but rather to authenticate the identities of the communicating parties. It’s like the bouncer outside an up-market club in Mayfair – you’re not getting in if you’re not on the list. However, without this ‘bouncer’, the concept of trustworthy online communication would be thrown to the wind.

So, how does it work?

PKI is built around two main concepts – keys and certificates. As with an Enigma machine, where the machine’s settings are used to encrypt a message (or establish a secure protocol), a key within a PKI is a long string of bits used to encrypt or decrypt encoded data. The main difference between the Enigma machine and a PKI is that with the latter, you have to somehow let your recipient know the settings used to encode the encrypted message.

The PKI gets its name because each party in a secured connection has two keys: public and private. A symmetric cipher, on the other hand, uses only a single shared secret key.

The public key is known to everyone and is used throughout the network to encode data, but the data cannot be accessed without the private key, which is used for decoding. The two keys are bound by complex mathematical functions that are difficult to reverse-engineer or crack by brute force. This principle is the essence of asymmetric cryptography.
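To see asymmetric encryption in action, here is a toy RSA round-trip in Python with textbook-sized primes. It is only a sketch of the mathematics; the numbers are far too small to be secure, and a real system would use a vetted cryptography library rather than hand-rolled RSA.

```python
# Toy RSA sketch: a public key encrypts, and only the matching private key
# decrypts. Textbook-sized primes -- NOT secure, illustration only.

def make_keypair():
    p, q = 61, 53                  # two small primes (real keys use far larger ones)
    n = p * q                      # modulus, shared by both keys
    phi = (p - 1) * (q - 1)        # Euler's totient of n
    e = 17                         # public exponent, coprime with phi
    d = pow(e, -1, phi)            # private exponent: modular inverse of e
    return (e, n), (d, n)          # (public key, private key)

def encrypt(message, public_key):
    e, n = public_key
    return pow(message, e, n)      # c = m^e mod n

def decrypt(ciphertext, private_key):
    d, n = private_key
    return pow(ciphertext, d, n)   # m = c^d mod n

public, private = make_keypair()
ciphertext = encrypt(65, public)
print(decrypt(ciphertext, private))  # 65 -- round-trips back to the message
```

Note that anyone holding the public pair (e, n) can encrypt, but recovering d from them requires factoring n, which is what makes the scheme hard to crack at real key sizes.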

So, this is how data is encrypted within a public key infrastructure. But let’s not forget that identity verification is just as important when dealing with PKIs – that’s where certificates come into play.

Digital Identity

PKI certificates are most commonly seen as digital passports containing lots of assigned data. One of the most important pieces of information in such a certificate relates to the public key: the certificate is the mechanism by which that key is shared – just like your Taxpayer Identification Number (TIN) or driver’s license, for instance.

But a certificate isn’t really valid unless it has been issued by some kind of trusted authority. In our case, this is the certificate authority (CA): an attestation from a trusted source that the entity is who it claims to be.

With this in mind, it becomes very easy to grasp what the PKI consists of:

•      A certificate authority, which issues digital certificates, signs them with its private key and stores them in a repository for reference;

•      A registration authority, which verifies the identities of those requesting digital certificates. A CA can act as its own registration authority or can use a third party to do so;

•      A certificate database that stores the certificates, their metadata and, most importantly, their expiration dates;

•      A certificate policy outlining the PKI's procedures (this is basically a set of instructions that allows others to judge how trustworthy a PKI is).
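To make the chain of trust concrete, here is a minimal, purely illustrative sketch of how a verifier walks a certificate's issuer chain up to a trusted root. The names and the dictionary "certificate" shape are invented for the example; a real PKI uses X.509 certificates and performs a cryptographic signature check at every step.

```python
# Minimal chain-of-trust walk: each certificate names its issuer, and
# verification succeeds only if the chain ends at a trusted root.
# Names and the "certificate" format are illustrative, not a real standard.

TRUSTED_ROOTS = {"ExampleRootCA"}

CERTS = {
    "www.example.com": {"issuer": "ExampleIntermediateCA"},
    "ExampleIntermediateCA": {"issuer": "ExampleRootCA"},
    "ExampleRootCA": {"issuer": "ExampleRootCA"},   # roots are self-signed
}

def chain_is_trusted(subject, certs, roots, max_depth=5):
    """Follow issuer links until we reach a trusted root (or give up)."""
    for _ in range(max_depth):
        if subject in roots:
            return True
        cert = certs.get(subject)
        if cert is None:            # unknown issuer: the chain is broken
            return False
        subject = cert["issuer"]
    return False                    # too deep: refuse rather than loop forever

print(chain_is_trusted("www.example.com", CERTS, TRUSTED_ROOTS))  # True
```

The depth limit matters in practice too: verifiers bound chain length so a malformed or circular chain cannot stall them.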

What is a PKI used for?

A PKI is great for securing web traffic – data flowing through the open internet can be easily intercepted and read if it isn't encrypted. Moreover, it can be difficult to trust a sender’s identity if there isn’t some kind of verification procedure in place.
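Python's standard library shows how much of this a TLS client gets for free; the sketch below just inspects the defaults of `ssl.create_default_context()`, which enforces certificate validation against the system's trusted roots and hostname matching before any data flows.

```python
# A TLS client leans on the Web PKI by default: Python's standard ssl
# module requires a valid certificate chain and a matching hostname.
import ssl

context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True: peer must present a valid cert
print(context.check_hostname)                    # True: cert must match the hostname
```

Disabling either check (as some quick-fix snippets suggest) reopens exactly the interception problem the PKI exists to prevent.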

But even though SSL/TLS certificates (that secure browsing activities) may demonstrate the most widespread implementation of PKI, the list doesn’t end there. PKI can also be used for:

•       Digital signatures on software;

•       Restricted access to enterprise intranets and VPNs;

•       Password-free Wi-Fi access based on device ownership;

•       Email and data encryption procedures.

PKI use is taking off exponentially; even a microwave can connect to Instagram nowadays. This emerging world of IoT devices brings new challenges, and even devices seemingly existing in closed environments now require security. Taking the ‘evil air conditioner’ from the introduction as an example: gone are the days when we could take a piece of kit at face value. Some of the most compelling PKI use cases today centre around IoT. Auto manufacturers and medical device manufacturers are two prime examples of industries currently introducing PKI for IoT devices. Edison’s Electronic Health Check-up System would be a very good example here, but we’ll save that for a future deep-dive.

Is PKI a cure-all?

As with any technology, execution is sometimes more important than the design itself. A recent study by the Ponemon Institute surveyed 603 IT and security professionals across 14 industries to understand the current state of PKI and digital certificate management practices. This study revealed widespread gaps and challenges, for example:

•       73% of security professionals admit that digital certificates still cause unplanned downtime and application outages;

•       71% of security professionals state that migration to the cloud demands significant changes to their PKI practices;

•       76% of security professionals say that failure to secure keys and certificates undermines the trust their organization relies upon to operate.

The biggest issue, however, is that most organizations lack the resources to support PKI: only 38% of respondents claim they have the staff to properly maintain it. So for most organizations, PKI maintenance becomes a burden rather than a cure-all.

To sum up, PKI is a silent guard that secures the privacy of ordinary online content consumers. However, in the hands of true professionals, it becomes a power tool that creates an encryption infrastructure that is almost infinitely scalable. It lives in your browser, your phone, your wifi access point, throughout the web and beyond. Most importantly, however, a correctly-configured PKI is the distance between your business and an imposter air conditioner that wants your hard-earned cash.

What is PKI? A Public Key Infrastructure definitive guide

Oct 11, 2021 — 3 min read

If you’re reading about password managers for the first time, you’re probably wondering why such a tool exists. Well, to help you out, we’ve compiled a ‘top three’ for password-related company pains. Moreover, we’ll illustrate how a password manager, like Passwork, can help you and your business to work smart; minimizing stress, maximizing safety.

Digitalisation and Human Error

First and foremost, COVID-19 forced the entire world into rapid digitalization. Whilst this digitalization has had a number of positive effects, it has also had the unfortunate impact of significantly increasing the number of attack vectors that bad actors can aim for. Imagine an entire wave of old-school office managers in their fifties buying laptops and working from home for the first time. Of course, it’s no surprise that cyberattacks are on the rise; in 2020 alone, 75% of organizations across the world experienced some kind of phishing attack.

Human error, in this sense, needn’t be accepted when you’ve got a password manager on hand. When you access a website that requires a password stored by your password manager, it is presented automatically by the software. However, this only occurs when the webpage is authentic. Phishing attacks rely on false web pages that mimic the real thing, and whilst this may trick a user, a password manager can tell fact from fiction every time. When users don’t see their password pop up on the screen, they know they’re not in the right place. But what if the user decides to put the password in manually? Well, password managers are on the case; oftentimes, complex and unmemorable passwords are generated, and users are forced to rely on auto-input. Human error? Eliminated.
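As a sketch of why autofill resists phishing, the toy example below keys stored credentials by the exact origin they were saved for, so a look-alike domain simply never matches. The vault contents and URLs are made up for illustration, and real password managers apply more nuanced matching rules than this.

```python
# Sketch of a password manager's autofill decision: credentials are keyed
# by the exact (scheme, hostname) origin they were saved for, so a
# look-alike phishing domain never matches. Entries and URLs are invented.
from urllib.parse import urlsplit

vault = {("https", "accounts.example.com"): "s3cret-password"}

def autofill(url, vault):
    parts = urlsplit(url)
    key = (parts.scheme, parts.hostname)
    return vault.get(key)           # None means: no fill, treat the page as suspect

print(autofill("https://accounts.example.com/login", vault))          # s3cret-password
print(autofill("https://accounts-example.com.evil.io/login", vault))  # None
```

The second lookup is the phishing case: the hostname looks familiar to a human, but as a dictionary key it is simply a different origin.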


Security audit preparation

The benefits of password managers don’t end there; they also significantly boost your chances of passing a security audit. This is mainly down to two factors. Firstly, software such as Passwork will alert you if your passwords are old, weak or compromised, which in turn encourages you to change them. In doing so, you’re bumping up your level of security and getting well prepared for that upcoming audit. Secondly, and more importantly, password managers minimize human involvement wherever possible, and as has already been established, this is key to secure operations. For example, if an employee leaves your company, Passwork will alert you that the passwords accessible to that employee have been compromised and must be updated as soon as possible to prevent any security breaches. The automation of this process reduces the manual, human involvement that may otherwise interfere with or obstruct the updating of passwords. This improves password hygiene, which in turn improves security audit results.
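The kind of hygiene check described above can be sketched as a small audit routine that flags old, short or known-weak passwords. The field names, thresholds and sample entries below are illustrative assumptions, not Passwork's actual logic.

```python
# Hedged sketch of a pre-audit hygiene check: flag entries that are on a
# known-weak list, too short, or not rotated recently. Field names and
# thresholds are illustrative, not any product's real policy.
from datetime import date

WEAK_LIST = {"123456", "password", "qwerty"}
MAX_AGE_DAYS = 180

def audit(entries, today):
    findings = []
    for name, info in entries.items():
        if info["password"] in WEAK_LIST:
            findings.append((name, "known weak password"))
        elif len(info["password"]) < 12:
            findings.append((name, "too short"))
        if (today - info["rotated"]).days > MAX_AGE_DAYS:
            findings.append((name, "not rotated in 180 days"))
    return findings

entries = {
    "crm":  {"password": "qwerty", "rotated": date(2021, 1, 10)},
    "mail": {"password": "T7#kw9!pLx2VbQ", "rotated": date(2021, 9, 1)},
}
for finding in audit(entries, today=date(2021, 10, 1)):
    print(finding)
```

Running checks like this continuously, rather than in a panic the week before an audit, is the point the paragraph above is making.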

Annual leave and unexpected absences

On a similar note, temporary staff absences, rather than permanent personnel changes, can also cause problems for companies without password management software. If an employee is absent, be it due to illness or holiday leave, then they take their passwords with them. This leaves everything that is protected by those passwords relatively inaccessible to the rest of the company, considerably disrupting day-to-day operations. Let’s say you’ve gone away for a relaxing beach holiday in Bali. It’s unlikely that you’ll be on your phone responding to Sharon from accounting, who needs the password to your email account, right? Well, Sharon can’t access an important PDF that was sent by a client last week, so she has to ask them to send it again. It’s inefficient, unprofessional, and will cost your company time and money.

However, this is an issue that password managers can solve; with solutions such as Passwork, there is no need for employees to commit passwords to memory, or write them down in places known only to them, because they are stored by Passwork, either on your local server or in the cloud. This means that they can be accessed by anybody with the appropriate security clearance. Business can run smoothly, regardless of staff absences, and you can get back to reading Sapiens with your mojito.

Tools such as Passwork are simply a necessity in the modern world. A company’s security is significantly weaker without them, and the brain, albeit a remarkable tool, is one that is notoriously prone to mistakes. If you want your organisation to reach its full potential, your operations must be as streamlined and efficient as possible, and you can only achieve this by using the most up-to-date password management software. Moreover, solutions such as Passwork come in a variety of sizes. If you’re reading this, wondering whether your business is even big enough for such infrastructure, think again; you’ll find a solution here perfectly tailored to the size and shape of your company.

Why do I need a password manager?

Aug 30, 2021 — 7 min read

Information technology is developing by leaps and bounds. There are new devices, platforms, operating systems, and a growing range of problems, which need to be solved by developers.

But it’s not so bad: new development tools, IDEs, programming languages, methodologies, etc., rush to help programmers. The list of programming paradigms is impressive, and with modern multi-paradigm languages (e.g., C#), it is reasonable to ask: "What is the best way to handle all this? What should I choose?"

Let’s try to figure this answer out.

Where did so many paradigms come from?

In fact, the answer is already implied above: different types of tasks are easier and faster to solve using an appropriate paradigm. As IT evolved, new types of problems appeared (and old ones became newly relevant), and solving them with the old approaches proved unsuitable and inconvenient, which led to rethinking and to the development of new techniques.

What to choose?

Everything depends on what is required. It is worth noting that development tools differ. For example, PHP with a "standard" set of modules does not support aspect-oriented programming, so the choice of methodology is quite closely linked to the development platform. And do not forget that you can combine different approaches, which means you can also stack paradigms.

For paradigm categorization, I use four dimensions that are inherent in almost any task:

1. Data: any program somehow works with data; it stores, processes, analyzes, reports.

2. Actions: any program should do something, and the actions are usually connected with the data.

3. Logic: business logic defines the rules that govern the data and actions; without it, the program does not make sense.

4. Interface: how the program interacts with the outside world.

We could go further and deepen this idea, coming up with quality characteristics for these four dimensions, creating strict rules and adding a little math, but that is perhaps a topic for another post. I think most system architects determine these characteristics for a specific task on the basis of their knowledge and experience.

Once you analyze your problem along these four dimensions, you will likely see that one dimension is expressed more strongly than the others. This, in turn, will determine the programming paradigm, as paradigms usually focus on a single dimension.

Consider this example

Orientation to the data (Data-driven design)

The data itself is the primary consideration, rather than how its parts are related to each other.

Types of suitable applications:

1. Grabbers/crawlers (collect data from different sources and save it somewhere).
2. Various admin interfaces to databases; anything with a lot of simple CRUD operations.
3. Cases where the resource is already defined: for example, a program must be developed against an existing database whose schema cannot be changed. In this case, it may be easier to focus on what already exists, rather than creating additional wrappers over the data and data-access layers. Using an ORM often leads to data-driven design, but it is impossible to say in advance whether that is good or bad (see below).

Orientation to actions—imperative approaches to development

Event-driven Programming, Aspect-oriented Programming, etc.

Orientation to logic: Domain-driven design (DDD) and everything connected with it

Here the subject area itself is what matters. We pay attention to modeling the objects and analyzing their relationships and dependencies. This is mainly used in business applications. It is a declarative approach, and functional programming (for tasks that are well described by mathematical formulas) is partly part of DDD as well.

Orientation to the interface

Used when how the program interacts with the outside world is the first concern.
Developing an application with a focus only on the interface is quite rare, although some of the books I’ve read mention that such an approach has been seriously considered: it starts from the user interface, taking what the user sees directly and, on that basis, designing the data structures and everything else.

Orientation to the user interface in business applications often manifests itself indirectly. For example, the user wants to see specific data that is difficult to obtain, and because of this the architecture acquires additional structures (e.g., forced data redundancy). Formally, event-driven programming also belongs here.

What about real life?

Based on my experience, I can say that two approaches dominate: focus on data (data-driven) and focus on logic (domain-driven). They are, in fact, competing methodologies, but in practice they can be combined in a symbiosis, although this often produces anti-patterns.

One of the advantages of data-driven over domain-driven is its ease of use and implementation. Therefore, data-driven is often used where domain-driven should be applied (and this frequently happens unconsciously). Problems arise from the fact that data-driven design is hardly compatible with the concepts of object-oriented programming (assuming, of course, that you use OOP). In small applications, these problems are almost invisible. In medium-sized applications, they become visible and begin to lead to anti-patterns. On major projects, the problems become serious and require appropriate action.

In turn, domain-driven wins on major projects, but on small ones it complicates the solution and requires more development resources, which is often critical in terms of business requirements (bringing the project to market "asap", on a small budget).

To understand the differences in the approaches, consider a more concrete example. Suppose we want to develop a system of accounting for sales orders. We have things such as:

1. Product
2. Customer
3. Quote
4. Sales Order
5. Invoice
6. Purchase Order
7. Bill

Having decided at a glance what the scope is, we begin to design the database. We create the appropriate tables and run the ORM to generate entity classes (or, in the case of a smart ORM, keep the schema somewhere separate, for example in XML, and generate both the database and the entity classes from it). Finally, we get an independent class for each entity. Enjoy life; it’s easy and simple to work with objects.

Time passes, and we need to add additional logic to the program, for example, to find the product with the highest price. There may already be a problem if your ORM does not support external relations (i.e., entity classes know nothing about the context of the data). In this case, it is necessary to create a service with a method that returns the suitable product for the order. But our good ORM can work with external relations, so we simply add a method to the order class. Enjoy life again; the goal is achieved, the method has been added to a class, and we have almost real OOP.

Time passes, and we need to add the same method for the quote, for the invoice, and for other similar entities. What to do? We can simply add this method to all the classes, but that is, in fact, code duplication, and it will backfire in support and testing. Wanting to avoid complications, we simply copy the method into every class. More similar methods follow, and the entity classes begin to swell with the same code.

Time passes, and logic appears that can’t be described by the external relations in the database. There is no way to place it in an entity class, so we begin to create services that perform these functions. As a result, the business logic ends up scattered across entity classes and services, and understanding where to look for the correct method becomes increasingly difficult. We decide to refactor and move the repetitive code out into services, extracting common functionality into interfaces (for example, an IProductable interface for anything that contains products). Services can work with these interfaces, winning a little in abstraction. But this does not fundamentally solve the problem: the services accumulate more and more methods, and for the sake of uniformity we end up transferring all the entity logic into service classes. Now we know where to look for methods, but our entity classes have lost all their logic, and we have arrived at the so-called "anemic model".

At this stage, we have completely departed from the concept of OOP: objects store only data, all the logic lives in separate classes, and there is no encapsulation and no inheritance.
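The anemic end-state can be sketched in a few lines: entities reduced to bare data holders, with all behaviour pulled out into a service class. The entity names follow the sales-order example; the fields are illustrative.

```python
# Sketch of the "anemic model" anti-pattern: entities are bare data
# holders, and every business rule lives in a service class instead.
from dataclasses import dataclass

@dataclass
class Product:          # data only -- no behaviour
    name: str
    price: float

@dataclass
class Order:            # data only -- the logic has drained out of it
    products: list

class OrderService:     # every rule ends up in service classes like this one
    @staticmethod
    def most_expensive_product(order):
        return max(order.products, key=lambda p: p.price)

order = Order(products=[Product("hammer", 9.5), Product("drill", 79.0)])
print(OrderService.most_expensive_product(order).name)  # drill
```

Nothing here is broken as such; the cost only shows once dozens of entities each need their rules hunted down in a growing pile of services.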

It is worth noting that this is not as bad as it may seem: nothing prevents us from implementing unit testing and test-driven development (TDD), or from integrating dependency-management patterns (IoC, DI), and so on. In short, we can live with it. Problems arise when the application grows large, when there are so many entities that it is unrealistic to keep them all in mind. At that point, supporting and developing such an application becomes a problem.

As you have probably guessed, this scenario describes the use of the data-driven approach and its problems.
In the case of domain-driven, we would proceed as follows. Firstly, there is no database design at the first stage. Instead, we would carefully analyze the problem domain, model it, and translate the model into an OOP language.

For example, we can create an abstract model of a document with a set of basic properties. From it we inherit a document that has products; from that, a "payment" document with a price and billing address; and so on. With this approach, it’s pretty easy to add a method that finds the most expensive product: we just add it to the appropriate base class.
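That base-class approach can be sketched as follows; the class names mirror the example above, and the details are illustrative.

```python
# Domain-driven sketch: shared behaviour lives in an abstract document
# hierarchy, so "find the most expensive product" is written exactly once.
from dataclasses import dataclass, field

@dataclass
class Product:
    name: str
    price: float

@dataclass
class Document:                       # abstract base for all documents
    number: str

@dataclass
class ProductDocument(Document):      # any document that carries products
    products: list = field(default_factory=list)

    def most_expensive_product(self):
        return max(self.products, key=lambda p: p.price)

@dataclass
class Quote(ProductDocument):         # inherits the behaviour for free
    pass

@dataclass
class SalesOrder(ProductDocument):    # ...and so does every sibling entity
    pass

quote = Quote(number="Q-1", products=[Product("hammer", 9.5), Product("drill", 79.0)])
print(quote.most_expensive_product().name)  # drill
```

Contrast this with the anemic version: here the quote, sales order and invoice all share one implementation through inheritance, at the cost of having to map this hierarchy onto database tables yourself.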

As a result, the problem domain is described using OOP to the fullest.
But there are obvious problems: how do we store the data in the database? It will require creating a mapping function from the models to the fields in the database. Such a mapper can be quite complex, and when you change the models, you also need to change the mapper.

Moreover, you are not immune to errors in the modeling, which can lead to complex refactoring.

Data-driven vs Domain-driven

Data-driven pros:

1. Allows you to quickly develop an application or prototype
2. Convenient to design (code generation, schemas, etc.)
3. Can be a good solution for small or medium-sized projects

Data-driven cons:

1. Can lead to anti-patterns and the loss of OOP
2. Leads to chaos on large projects, complex support, etc.

Domain-driven pros:

1. Uses the full power of OOP
2. Allows you to control the complexity of the domain
3. Offers a number of advantages not described in this article, for example, building a domain language and using BDD
4. Provides a powerful tool for developing complex and large solutions

Domain-driven cons:

1. Requires significantly more development resources, which makes the solution more costly
2. Certain parts become harder to support (the data mapper, etc.)

So, what the hell should I choose?

Unfortunately, there is no single answer. Analyze your problem, resources, prospects, goals, and objectives. The right choice is always a compromise.

Application design: Data-driven vs Domain-driven

Aug 30, 2021 — 3 min read

Positioning is an important aspect

Positioning is so important that, if this stage is skipped, all other efforts to promote the product can be ruined. Good positioning should be short, clear, and understandable; it is therefore often described in one sentence or made to fit in a tweet. Positioning should be directly related to the main problem that the product solves for its users.

The difficulty is that a product often solves different problems for different users. For example, an online accounting system, providing the same capabilities, solves different problems for the entrepreneur and for the accountant. To the question "Who are you?", different answers may be given depending on who is asking. Positioning is closely related to target audience segmentation. I have often heard startups say that their product is made for all users of the Internet, or something like that, which is definitely an outlandish kind of segmentation.

Begin to break your users into segments. Try various characteristics: gender, age, income level, interests, etc. The task is to break all users into segments in such a way that everyone within a single segment is uniform. That is, from the perspective of the product, all users of one segment are like twins, indistinguishable to a significant extent.

Take one user from a segment and tell them about the product. Then take any other user of the same segment; their stories about the product should be similar. Divide users from the largest segments to the smallest. Specify what problem your product solves for each segment. If more than one problem emerges within a single segment, divide the segment into further sub-segments.

As a result, you should get:

1. The segment and its characteristics (feature set)
2. The problem solved for the users of that segment
3. Positioning for this segment; the product can be positioned in a single sentence.

From this scheme, you automatically get ready-made advertising campaigns for Yandex, VKontakte, Google and Twitter, and you understand where to look for leads and which acquisition channels to use, based on segment performance. By looking at the segment tree, you can also go in the opposite direction: summarize a number of problems and get the main product positioning. And the detailed list will be a good start for developing the landing page.

Take, for example, the development of websites.

For whom?

1. For everyone who may need a site? — Well, yes.
2. For business? — Yeah, right!
3. For business owners who have heard something about the Internet and are interested in finding customers? — Getting warmer.
4. Does this business have a site?
5. Is this a recently established business?
6. What volumes (for example, how many employees)?
7. What lines of business?

Eventually, we obtain a segment such as the following:

1. A recently established company (6–12 months old),
2. with a small staff (10–20 people),
3. recently launched, with no site at the moment, or a site that hasn’t met its targets, and the like,
4. not willing to spend a lot of money on developing a site,
5. which, asked whether a site is needed at all, will most likely answer, "Well, of course it is",
6. and which does not plan to actively attract customers via the Internet.

The situation is one of those "a website is needed, well, just so that we have one" cases; the problem is simply that the business is not represented on the Internet. Positioning: "We set up websites for business start-ups on the Internet in one week, for so many rubles." This is not ideal, but the point is clear: offer affordable, perhaps typical or template solutions, landing pages with minimal customization, cheap but good and fast. It is worth noting that in the site-building world, young companies especially need to be well positioned and well niched. Sites solve many different problems, so cramming site development into a one-size-fits-all offer does not work.

So, in summary:

1. Divide your customers into segments according to the problems that your product solves. The more homogeneous the segment, the better the result (but without fanaticism).

2. Check No. 1: for each segment, it should be easy to explain what you have to offer, without any "and"s or "or"s.

3. Check No. 2: the product positioning for the segment, the tagline and the main message should all fit within a tweet.

4. Segmentation and positioning are closely linked. One may be used to create the other, and vice versa.

5. Segmentation and positioning give insight into which customers to look for, how and where to look for them, and what particular offer to give them. You can write a statement for Sales.

6. Structuring allows you to identify the main problems to be solved by the product, and a host of other artifacts that can be used, for example, for setting up a landing page.

Market segmentation

Aug 30, 2021 — 3 min read

After shipping their first release or MVP (minimum viable product), startups face the challenge of promotion and marketing on the Internet. Through Yandex or Google AdWords, a single click designed to attract customers can cost $1 or more, and publishing a single article on a popular media site can cost more than $1,000 per campaign. Few new startups, even those with serious financial backing, can afford to spend on such a grand and costly scale.

Or they can, but the outcome is well established: the funds are all spent within a couple of months, whether there were any sales or not, and no money is left for further product development.

Very often, startup beginners say: "We do not know how much we need for marketing. How do we estimate it?", or: "We need ten million for marketing... hmm, no, let's make it twenty...".

Then they quickly start buying ads, spending money in vain, and slowly begin to realize that marketing is one hell of an expensive endeavor. You may have a great product, everyone may like it, everyone may be dying to use it, but you still need to spend a lot of money on advertising. So afterwards, they start looking for investment specifically for marketing.

So, to sum up: marketing and promotion of IT startups on the Internet proves to be expensive, unclear and unpredictable.


In fact, the promotion of startups is understandable and predictable. Whether it will be costly or not depends on each particular situation.

It is really quite possible to promote IT-startups using only little or no financial investment at all. In this case, impressive and surprising results can be achieved.

Out in the wider world, these techniques are called growth hacking.

And now, a small FAQ on explosive promotion:

1. Wow, is it really possible for any project to promote itself explosively, without money and without doing anything at all?
− No, not any project. A lot of work still has to be done. But for many projects, not a lot of money has to be spent.

2. So, if not ANY project, then which ones?
− First of all, IT startups or projects that are well integrated with the Internet. But the general principles apply everywhere.

3. So, purchasing advertising space will be a thing of the past?
− Yes and no. Advertising will always be a great help if you can afford it, and sometimes it may be the only option.

4. Do I still have to learn about marketing or hire a marketer?
− You need to know the fundamentals. Growth hacking and marketing are related, but not deeply. It is a kind of side approach, where your brains, simple logic and entrepreneurial skills all really matter.

How growth hacking works

The basic idea of explosive promotion is very simple and logical. If your product is good, its users will talk about it themselves; they only need help to do that. Practically all the techniques reduce to one single aim: to increase virality without any cash expenditure.

The obvious precondition is that your product really needs to solve user problems; it should be necessary, useful and convenient. You must have a high-quality product.
Every startup is sure it has built the desired product, but this is not always the case.

Explosive promotion works well for IT startups because users can easily talk about them and attract other users; the Internet is there to help. In addition, explosive promotion is a pack of little tricks that help you get close to your audience: motivate users, talk about the product, test ideas, increase conversion, and so on.

In summary,

1. The "standard way": more money into advertising, greater audience reach, more new users.

2. Explosive promotion: users themselves get involved, together with their friends. Avalanche-like exponential growth. A quality product is needed.

3. Divide all your customers into segments according to the problems that your product solves. The more homogeneous the segment, the better (but without fanaticism).

4. Check No. 1: for each segment, it is very easy to explain what you have to offer, without any "and"s or "or"s.

5. Check No. 2: the product positioning for the segment, the tagline and the main message should all fit within a tweet.

6. Segmentation and positioning are closely linked. One can lead to the other, and vice versa.

7. Segmentation and positioning give insight into which customers to look for, how and where to look for them, and what particular offer to give them. You can write a statement for Sales.

8. Structuring allows you to identify the main problems to be solved by the product, and a host of other artifacts that can be used, for example, for setting up a landing page.