In this blog series, we will learn the basics of encryption and decryption in a very casual fashion. We will start from the origins of cryptography and then move on to modern techniques.
Cryptography is the foundation of encryption. It is extremely common and is used almost everywhere today, in software such as:
Routers we use in our homes
Whatsapp we use to send receive messages
Any site we are opening on https
… and so on.
At the same time, cryptography is extremely complex: complex to understand, to implement, and even to use correctly.
Need for Cryptography:
To answer this, let's first ask ourselves:
Why do we set a password on our phone?
Why do we lock the door of our home when going out, and take the keys with us?
The answer for cryptography is the same – to assure that only the right sender and receiver can see the data, and in its right form.
To understand how it works, let's start with an ancient method known as the Caesar Cipher.
Caesar Cipher
The Caesar Cipher is a type of substitution cipher in which each letter of the plaintext is shifted a certain number of places down the alphabet. This simple encryption technique is named after Julius Caesar, who used it to encrypt messages sent to his officials. The process of encryption is simple: a shift value (also known as the key) is chosen, and each letter of the plaintext is shifted by that number of positions down the alphabet.
For example, with a shift of 3, A would be replaced by D, B would be replaced by E, and so on.
Plaintext: Sahil
Shift(Key): 3
Ciphertext: Vdklo
To decrypt the message, the shift value is used in the opposite direction.
Ciphertext: Vdklo
Shift(Key): -3
Plaintext: Sahil
It is important to note that the Caesar Cipher is very easy to break and should not be used for any serious encryption purposes as it can be broken by simple methods such as frequency analysis.
The algorithm is simple: shift each letter N positions down the alphabet, replace it with the resulting letter, and continue until the whole plaintext is converted into ciphertext.
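The whole scheme fits in a few lines of Python (a toy illustration of the algorithm above, not something to use for real security):

```python
def caesar(text, key):
    """Shift each letter by `key` positions, wrapping around the alphabet."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + key) % 26 + base))
        else:
            result.append(ch)  # leave spaces, digits, punctuation unchanged
    return ''.join(result)

ciphertext = caesar("Sahil", 3)     # encrypt with key 3
plaintext = caesar(ciphertext, -3)  # decrypt by shifting back
print(ciphertext, plaintext)        # Vdklo Sahil
```

Note that decryption is just encryption with the negated key, which is why the same function serves both directions.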
A Caesar Cipher table is a tool that can be used to manually encrypt and decrypt messages using the Caesar Cipher method. It is a table that lists all of the letters of the alphabet and their corresponding encrypted or decrypted letters, based on a chosen shift value (or key).
Here is an example of a Caesar Cipher table with a shift(Key) value of 3:
Plaintext | Ciphertext
A         | D
B         | E
C         | F
… and so on
Caesar cipher text is easy to reverse engineer: by identifying the letter-shift pattern, one can quickly determine the key and decode any message encrypted with this algorithm. This was just an example to get readers familiar with the basic concepts of cryptography, encryption, and decryption.
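To see just how weak it is: there are only 25 possible keys, so an attacker does not even need frequency analysis; trying every shift and reading off the one that looks like language takes milliseconds. A self-contained sketch:

```python
def shift(text, key):
    """Caesar-shift `text` by `key` positions (negative key shifts back)."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + key) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

# Brute force: try every possible key against an intercepted ciphertext
for key in range(1, 26):
    print(key, shift("Vdklo", -key))
# At key 3 the output reads "Sahil"; the attacker just picks it from the list
```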
Till now we have come across some important terms:
Cryptography
Algorithm
Plain Text
Key
Cipher Text
Please keep them in mind, as these are the generic terms used everywhere in the world of encryption and decryption.
In the next blog we will chat about some other things, like:
Making algorithms stronger than the Caesar cipher
The latest algorithms used nowadays
The types of cryptography techniques available, and when each should be used
In this blog, we will look at an issue I found on one of our production servers, where our Java application was becoming slow due to frequent GC pauses, and the solution. I will explain one particular pattern that can be a reason for triggering more GC pauses.
To follow this article, one needs basic knowledge of Java's G1GC garbage collection algorithm.
Don't worry if you don't know G1GC yet; I will write articles on G1GC basics later, and you can then revisit this one.
So, let's start with the issue we were facing.
Issue: the application was becoming unresponsive very frequently.
Analysis:
After debugging JVM-level stats dumped from JMX beans, it was clear that GC collection time was spiking intermittently.
Heap usage was also growing.
After that, we enabled GC logging by adding -Xlog:gc=debug:file=/tmp/gc.log to the JVM arguments when starting the application.
Analyzing gc.log, we found that Full GC was triggering many times. Whenever a Full GC triggers, it generally stops the application for some time; in Java terminology we call this STW (Stop the World).
Generally, G1GC has the following types of events:
Minor: Eden + Survivor From -> Survivor To
Mixed: Minor + (# reclaimable Tenured regions / -XX:G1MixedGCCountTarget) regions of Tenured
Full GC: All regions evacuated
Minor/Mixed + To-space exhaustion: Minor/Mixed + rollback + Full GC
In a smoothly running application, you should see only batches of Minor events alternating with batches of Mixed events. Full GC events and To-space exhaustion are things you absolutely do not want to see when running G1GC; they need to be detected and eliminated, and if they do occur they should only be caused by deliberate external actions (jstack, jmap, etc.).
For in-depth details of these events, as stated earlier, I will write a blog series explaining G1GC concepts; for now you can search the net.
Now, coming back to our debugging ,
We verified that no external command for taking a thread dump, heap dump, or histogram had been run that could have initiated a Full GC event.
So the question remained: why was Full GC triggering?
On further research, we found that Humongous objects can be one of the reasons for triggering Full GC events.
Now, what are Humongous objects?
A brief definition: any single allocation ≥ G1HeapRegionSize/2 is considered a Humongous object. Humongous objects are allocated out of contiguous regions of Free space, which are then added to Tenured. Because Humongous objects are allocated out of Free space, allocation failures there trigger GC events, and a GC event triggered by a Free-space allocation failure is a Full GC, which is very undesirable in most circumstances. To avoid Full GC events in an application with lots of Humongous objects, one must ensure the Free space pool is large enough compared to Eden that Eden always fills up first.
So we started checking whether our application was generating Humongous objects.
From gc.log we found that lots of Humongous objects were being created, and these were the reason for the Full GC events.
I put together some Linux commands to check for Humongous objects.
They report the sizes of the humongous objects generated by the application.
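The exact commands were simple shell one-liners over gc.log; an equivalent sketch in Python is below. The log wording for humongous allocations varies across JDK versions, so the regular expression and the sample log line here are assumptions, not a fixed format:

```python
import re

# Sketch: scan GC log lines for humongous allocations and sum the sizes.
# The exact log wording varies across JDK versions, so this pattern is
# an assumption; adjust it to match your own gc.log.
HUMONGOUS = re.compile(r'humongous.*?(\d+)\s*bytes', re.IGNORECASE)

def humongous_stats(lines):
    """Return (count, total_bytes) of humongous allocations in the lines."""
    sizes = [int(m.group(1)) for m in map(HUMONGOUS.search, lines) if m]
    return len(sizes), sum(sizes)

sample = [
    '[gc] Pause Young (Normal) 512M->128M(2048M) 12.3ms',
    '[gc] Allocated humongous object of 4194304 bytes',
]
print(humongous_stats(sample))  # (1, 4194304)
```

In practice you would feed it the real file, e.g. `humongous_stats(open('/tmp/gc.log'))`.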
We were on a Java version older than Oracle JDK 8u45; for versions newer than this, the release notes state that Humongous objects also get collected during Minor events.
Search for “G1 now collects unreachable Humongous objects during young collections” in the release notes.
The new Content-Security-Policy HTTP response header helps you reduce XSS risks on modern browsers by declaring which dynamic resources are allowed to load.
The core functionality of CSP can be divided into three areas:
Requiring that all scripts are safe and trusted by the application owner (ideally by making sure they match an unpredictable identifier specified in the policy called the CSP nonce),
Ensuring that page resources, such as images, stylesheets, or frames, are loaded from trusted sources,
Miscellaneous other security features: preventing the application from being framed by untrusted domains(frame-ancestors), transparently upgrading all resource requests to HTTPS, and others.
Now the major concerns are as follows:
What should the value of the CSP header be to provide the utmost security?
What value of the CSP header will be acceptable to VAPT vendors, while requiring no major changes if my application is old?
What needs to be taken care of while deciding the header value, and how strict should it be?
How should code be written (when building a new application) so that it remains compatible with CSP standards?
This blog will clear up all of the above doubts.
Here is an example of a strict header value that can be applied in production setups:
Let’s look at the properties of this policy as interpreted by a modern browser:
object-src ‘none’ Prevents fetching and executing plugin resources embedded using <object>, <embed> or <applet> tags. The most common example is Flash.
script-src ‘nonce-{random}’ ‘unsafe-inline’ The nonce directive means that <script> elements will be allowed to execute only if they carry a nonce attribute matching the randomly generated value that appears in the policy. Note: in the presence of a CSP nonce the unsafe-inline directive will be ignored by modern browsers. Older browsers, which don’t support nonces, will see unsafe-inline and allow inline scripts to execute.
script-src ‘strict-dynamic’ https: http: ‘strict-dynamic’ allows the execution of scripts dynamically added to the page, as long as they were loaded by a safe, already-trusted script (see the specification). Note: In the presence of ‘strict-dynamic’ the https: and http: whitelist entries will be ignored by modern browsers. Older browsers will allow the loading of scripts from any URL.
‘unsafe-eval’ allows the application to use the eval() JavaScript function. This reduces the protection against certain types of DOM-based XSS bugs, but makes it easier to adopt CSP. If your application doesn’t use eval(), you can remove this keyword and have a safer policy.
base-uri ‘none’ Disables <base> URIs, preventing attackers from changing the locations of scripts loaded from relative URLs. If your application uses <base> tags, base-uri ‘self’ is usually also safe.
frame-ancestors https://example.com – This means that your application page can be opened in an iframe only by pages served from example.com.
Now, if you have decided on a CSP header value and want to check whether it is OK to use, you can run it through a CSP evaluator tool.
I hope the first two concerns listed above are now clear; let's move on to the next one.
Setting the correct header value alone does not make your application safe; we also need to make some changes to the client-side code to make the application CSP compatible.
Code Changes
Random Nonce in Code
Above, we talked about the random nonce that needs to be set in the CSP header, but how does setting a random nonce in the header achieve security? The answer is as follows:
We also need to set this same nonce on each trusted script tag. When the browser loads a page, it compares the nonce value from each script tag with the one in the header; if they do not match, the script is marked as unsafe and blocked.
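As an illustration, here is a minimal framework-agnostic sketch of the server side of this scheme. The function name and the `{nonce}` template placeholder are assumptions for this example, not a real library API; the essential point is that a fresh, unpredictable value is generated per response and placed in both the header and the trusted script tags:

```python
import secrets

def make_csp_response(body_template):
    # Generate a fresh, unpredictable nonce for every single response
    nonce = secrets.token_urlsafe(16)
    headers = {
        'Content-Security-Policy':
            f"script-src 'nonce-{nonce}' 'strict-dynamic'; "
            "object-src 'none'; base-uri 'none'"
    }
    # Stamp the same nonce into each trusted <script> tag of the page
    body = body_template.replace('{nonce}', nonce)
    return headers, body

headers, body = make_csp_response('<script nonce="{nonce}">init();</script>')
# The browser executes init() only because the tag's nonce matches the header's
```

An attacker injecting a script tag cannot guess the nonce for the current response, so their script fails the comparison and is blocked.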
With ’strict-dynamic’, dynamically generated scripts implicitly inherit the nonce from the trusted script that created them. This way, already- executing, legitimate scripts can easily add new scripts to the DOM without extensive application changes. However, an attacker who finds an XSS bug, not knowing the correct nonce, is not able to abuse this functionality because they are prevented from executing scripts in the first place.
Refactor inline event handlers and javascript: URIs
Inline event handlers (onclick=”…”, onerror=”…”) and <a href=”javascript:…”> links can be used to run scripts, so an attacker who finds an XSS bug could inject such HTML and execute malicious JavaScript. CSP requires refactoring those patterns into safer alternatives.
In most cases the changes will be straightforward. To refactor event handlers, rewrite them to be added from a JavaScript block:
Code before CSP compatibility
<script> function doThings() { ... } </script>
<span onclick="doThings();">A thing.</span>
Code after CSP compatibility
<span id="things">A thing.</span>
<script nonce="${nonce}">
document.addEventListener('DOMContentLoaded', function () {
document.getElementById('things')
.addEventListener('click', function doThings() { ... });
});
</script>
For javascript: URIs, you can use a similar pattern:
If your application uses eval() to convert JSON string serializations into JS objects, you should refactor such instances to JSON.parse().
If you cannot remove all uses of eval() you can still set a CSP policy, but you will have to use the ‘unsafe-eval’ CSP keyword which will make your policy slightly less secure.
Nowadays there are many frameworks where we do not write the HTML and JS ourselves; instead we write code in Java and the framework converts it into JS, GWT for example. In these cases, ensuring the generated code is CSP compatible is the responsibility of the framework.
For more knowledge on CSP you can read the following research paper, and to see what other options can be added to the CSP header, along with browser support, visit content-security-policy.com .
So, let's make your application safer by including the CSP header.
Please comment with any further suggestions related to CSP.
You can check out the following video for an explanation of this blog.
In our previous blog we covered Filebeat and Metricbeat and explored the system module. In this blog we will see how to use Heartbeat to monitor services.
Heartbeat should not be installed on every server you monitor; instead, install it on a few separate servers from which you can probe all your URLs/services. For example, if we have a service deployed at x.x.x.x:8000 on a server in the AWS north region, we can install Heartbeat on four servers, one in each AWS region (north, south, east, west), and probe the service from all of them to check whether it is up from all over India.
From these four servers we can monitor all the service URLs.
For setting up Heartbeat, follow this link:
After that, on the Dashboard tab you can see the Heartbeat monitoring.
You can also use the Uptime app in Kibana to check status, TLS expiry time, and the history of all downtimes.
Following are some screenshots:
Configuration in heartbeat.yml for setting the name of the machine from which the URLs are pinged:
processors:
- add_observer_metadata:
# Optional, but recommended geo settings for the location Heartbeat is running in
geo:
# Token describing this location
name: sahil-machine
# Lat, Lon
#location: "37.926868, -78.024902"
Configuration in heartbeat.yml for setting the URLs to monitor:
heartbeat.config.monitors:
# Directory + glob pattern to search for configuration files
path: ${path.config}/monitors.d/*.yml
# If enabled, heartbeat will periodically check the config.monitors path for changes
reload.enabled: false
# How often to check for changes
reload.period: 5s
# Configure monitors inline
heartbeat.monitors:
- type: http
# Set enabled to true (or delete the following line) to enable this example monitor
enabled: false
# ID used to uniquely identify this monitor in elasticsearch even if the config changes
id: my-monitor
# Human readable display name for this service in Uptime UI and elsewhere
name: My Monitor
# List of urls to query
urls: ["http://localhost:9200"]
# Configure task schedule
schedule: '@every 10s'
# Total test connection and data exchange timeout
#timeout: 16s
# Name of corresponding APM service, if Elastic APM is in use for the monitored service.
#service.name: my-apm-service-name
- type: http
# Set enabled to true (or delete the following line) to enable this example monitor
enabled: true
# ID used to uniquely identify this monitor in elasticsearch even if the config changes
id: emerge-gurgaon
# Human readable display name for this service in Uptime UI and elsewhere
name: emerge-gurgaon
# List of urls to query
urls: ["https://app.ameyoemerge.in:8887/"]
# Configure task schedule
schedule: '@every 10s'
# Total test connection and data exchange timeout
#timeout: 16s
# Name of corresponding APM service, if Elastic APM is in use for the monitored service.
#service.name: my-apm-service-name
- type: http
# Set enabled to true (or delete the following line) to enable this example monitor
enabled: true
# ID used to uniquely identify this monitor in elasticsearch even if the config changes
id: emerge-banglore-app24
# Human readable display name for this service in Uptime UI and elsewhere
name: emerge-banglore-app24
# List of urls to query
urls: ["https://app24.ameyoemerge.in:8887/"]
# Configure task schedule
schedule: '@every 10s'
# Total test connection and data exchange timeout
#timeout: 16s
# Name of corresponding APM service, if Elastic APM is in use for the monitored service.
#service.name: my-apm-service-name
In the next blog we will explore Logstash with Filebeat. Happy debugging…
As lots of our servers are now deployed in the cloud, with many applications running on them, it is impossible to monitor and analyze logs by logging in to each server. A central logging and monitoring solution is a must these days.
In this blog series, we will learn about the usage of the Elastic Stack, aka ELK.
Overview :
Elastic Stack is a group of open source products from Elastic designed to help users take data from any type of source and in any format and search, analyze, and visualize that data in real time. The product group was formerly known as ELK Stack, in which the letters in the name stood for the products in the group: ElasticSearch, Logstash and Kibana. A fourth product, Beats, was subsequently added to the stack, rendering the potential acronym unpronounceable. Elastic Stack can be deployed on premises or made available as Software as a Service
Architecture:
For a small-sized development environment, the classic architecture will look as follows :
There are many different types of Beats; you can read about them at https://www.elastic.co/beats/ . Each Beat has a different set of use cases.
In this blog we will learn about two beats MetricBeat and FileBeat .
Note – Logstash is an optional part of the architecture and is not needed in most cases. Read more about Logstash at https://www.elastic.co/logstash/
Using the Elastic Stack:
I am running these experiments on a CentOS 7 machine and using rpm to set up the Elastic Stack.
If you get output like the above, Elasticsearch is installed successfully.
Note: to change the listen address and port, edit the following file: /etc/elasticsearch/elasticsearch.yml
Kibana :
Kibana is the front-end tool that communicates with Elasticsearch, through which anyone can monitor and analyze logs.
Commands to install kibana :
curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-7.14.0-linux-x86_64.tar.gz
tar xzvf kibana-7.14.0-linux-x86_64.tar.gz
cd kibana-7.14.0-linux-x86_64/
./bin/kibana
Note: edit config/kibana.yml to configure the listen port and IP address.
Beats
Beats are installed on all the servers from which we want to collect information; they act as agents that ship data to Elasticsearch.
Enabling Metric Beat :
Every Beat supports different modules, and it is up to the user which modules to enable in each Beat. Metricbeat, for instance, has many modules such as System, Postgres, Nginx, and so on. In this blog we will see the usage of Metricbeat's System module.
sudo metricbeat modules enable system
sudo metricbeat setup -e
sudo service metricbeat start
Here we are enabling only the system module of Metricbeat; there are many modules for basic monitoring of applications such as PostgreSQL, Nginx, Tomcat, etc.
To list the modules available in Metricbeat, the command is:
metricbeat modules list
Now we can monitor system data in Kibana as follows.
Open the [Metricbeat System] Host overview ECS dashboard under Dashboards in the Kibana UI. There you can filter by the host whose data you want to see.
Metricbeat System module uses: what analysis can be done with the System module?
Traditionally, we gather system information by logging in to Linux servers and running many different commands and tools, which takes time, especially during a live production issue.
The module gives us the following information:
Size information of all partitions
Read/write performance of the hard disk
Inbound/outbound traffic analysis per Ethernet port
Load average analysis of the system
Top processes consuming high CPU and RAM
All this information can now be seen in seconds for any particular host using the Kibana UI.
Following are some screenshots :
Enabling FileBeat
Whether you’re collecting from security devices, cloud, containers, hosts, or OT, Filebeat helps you keep the simple things simple by offering a lightweight way to forward and centralize logs and files.
Note: to configure where Filebeat sends its data (Elasticsearch, or Logstash if used), edit /etc/filebeat/filebeat.yml. Currently I have only one machine, so the default configuration works for me with no changes. You can check the following link: https://www.elastic.co/guide/en/beats/filebeat/7.14/configuring-howto-filebeat.html
Enabling the system logs module in Filebeat:
filebeat modules enable system
(To set custom paths for system logs, edit the file /etc/filebeat/modules.d/system.yml; generally no changes are needed.)
filebeat setup -e
sudo service filebeat start
Like Metricbeat, Filebeat also has a list of modules such as postgres and nginx; it also supports the logging of popular frameworks like Spring, and can collect the logs of these applications and provide ways to analyze them easily.
To check modules list available for filebeat use following command :
[root@localhost elk]# filebeat modules list | less
Filebeat System module uses:
Now you can use the Kibana UI to analyze system logs such as messages.
Open the [Filebeat System] Syslog dashboard ECS under the Dashboard tab in Kibana.
Following are some screenshots:
Configure filebeat for custom log files :
We may have a situation where none of the modules or framework-logging integrations in Filebeat work for our custom application logs. In that case you can configure an input manually, setting the paths of the log files to read, and then analyze them in the Logs/Stream section of the Kibana UI.
Here you can search the logs by hostname or file path, and you can also search within the whole fetched message.
By default, only the message column is shown; this can be configured in the Settings tab of the Logs tab in Kibana.
Following are some screenshot :
By default, log lines are treated as a single column; if for advanced debugging we want to break a log line into columns, we need to use Logstash with the Grok filter.
In the next blog we will see how to use Logstash to break custom logs into columns for better understanding.
In our daily debugging we need to analyze the log files of various products. Reading those log files is not an easy task; it requires special debugging skills, which can only be gained through experience (or by god's grace). While debugging we often need to extract data or otherwise play with a log file in ways that cannot be done by just reading it: we need commands.
Linux has many commands used by debuggers, such as grep, awk, sed, wc, taskset, ps, sort, uniq, cut, xargs, etc.
In this blog we will see practical examples of grep commands that are useful in real-world debugging on Linux. The examples are super basic, but very useful in real life; a beginner should read through them to enhance their debugging skills.
Let's get to the practical part.
Grep the lines which contains some particular word
[root@localhost playground]# cat file1.log
hello
i am sahil
i am software engineer
Sahil is a software engineer
sahil is a software engineer
[root@localhost playground]# grep 'sahil' file1.log
i am sahil
sahil is a software engineer
Count the number of lines matching a particular word in a file
[root@localhost playground]# cat file1.log
hello
i am sahil
i am software engineer
Sahil is a software engineer
sahil is a software engineer
[root@localhost playground]# grep -c 'sahil' file1.log
2
Another way :
[root@localhost playground]# grep 'sahil' file1.log | wc -l
2
Grep all the lines containing some word in a file, case-insensitively
[root@localhost playground]# cat file1.log
hello
i am sahil
i am software engineer
Sahil is a software engineer
sahil is a software engineer
[root@localhost playground]# grep -i 'sahil' file1.log
i am sahil
Sahil is a software engineer
sahil is a software engineer
[root@localhost playground]#
Grep the lines in which either of two words is present in a file
[root@localhost playground]# cat file1.log
hello
i am sahil
i am software engineer
Sahil is a software engineer
sahil is a software engineer
[root@localhost playground]# grep 'sahil\|software' file1.log
i am sahil
i am software engineer
Sahil is a software engineer
sahil is a software engineer
[root@localhost playground]#
Grep lines in which two words are present
[root@localhost playground]# cat file1.log
hello
i am sahil
i am software engineer
Sahil is a software engineer
sahil is a software engineer
[root@localhost playground]# grep 'sahil' file1.log | grep 'software'
sahil is a software engineer
[root@localhost playground]# ^C
[root@localhost playground]#
Eliminate lines which contain some word
[root@localhost playground]# cat file1.log
hello
i am sahil
i am software engineer
Sahil is a software engineer
sahil is a software engineer
[root@localhost playground]# grep -v 'sahil' file1.log
hello
i am software engineer
Sahil is a software engineer
Eliminate case insensitively
[root@localhost playground]# grep -iv 'sahil' file1.log
hello
i am software engineer
[root@localhost playground]#
Matching the lines that start with a string
[root@localhost playground]# cat file1.log
hello
i am sahil
i am software engineer
Sahil is a software engineer
sahil is a software engineer
[root@localhost playground]# grep '^sahil' file1.log
sahil is a software engineer
Matching the lines that end with a string
[root@localhost playground]# cat file1.log
hello
i am sahil
i am software engineer
Sahil is a software engineer
sahil is a software engineer
[root@localhost playground]# grep 'engineer$' file1.log
i am software engineer
Sahil is a software engineer
sahil is a software engineer
[root@localhost playground]#
Get n lines after each match
[root@localhost playground]# cat file1.log
hello
i am sahil
i am software engineer
Sahil is a software engineer
sahil is a software engineer
[root@localhost playground]#
[root@localhost playground]# grep 'hello' file1.log
hello
[root@localhost playground]# grep -A 1 'hello' file1.log
hello
i am sahil
[root@localhost playground]# grep -A 2 'hello' file1.log
hello
i am sahil
i am software engineer
Get n lines before each match
[root@localhost playground]# cat file1.log
hello
i am sahil
i am software engineer
Sahil is a software engineer
sahil is a software engineer
[root@localhost playground]# grep 'i am sahil' file1.log
i am sahil
[root@localhost playground]# grep -B 1 'i am sahil' file1.log
hello
i am sahil
[root@localhost playground]# grep -B 2 'i am sahil' file1.log
hello
i am sahil
[root@localhost playground]#
Grep n lines after and m lines before every match
[root@localhost playground]# cat file1.log
hello
i am sahil
i am software engineer
Sahil is a software engineer
sahil is a software engineer
[root@localhost playground]# grep -A 2 -B 1 'i am sahil' file1.log
hello
i am sahil
i am software engineer
Sahil is a software engineer
[root@localhost playground]#
Grep some word in more than one file in current directory
[root@localhost playground]# cat file1.log
hello
i am sahil
i am software engineer
Sahil is a software engineer
sahil is a software engineer
[root@localhost playground]# cat file2.log
hello
i am sahil
i am tech blogger
Sahil is a tech blogger
sahil is a tech blogger
[root@localhost playground]# grep 'sahil' file1.log file2.log
file1.log:i am sahil
file1.log:sahil is a software engineer
file2.log:i am sahil
file2.log:sahil is a tech blogger
Grep some word in all files in current directory
[root@localhost playground]# grep 'sahil' *
file1.log:i am sahil
file1.log:sahil is a software engineer
file2.log:i am sahil
file2.log:sahil is a tech blogger
[root@localhost playground]#
[root@localhost playground]# cat file3.log
time taken by api is 1211 ms
time taken by api is 2000 ms
time taken by api is 3000 ms
time taken by api is 4000 ms
time taken by api is 50000 ms
time taken by api is 123 ms
time taken by api is 213 ms
time taken by api is 456 ms
time taken by api is 1000 ms
Now suppose we want to grep all the lines in which the time taken by an API is 1 second (1000 ms) or more; that means the number should have at least 4 digits.
The grep command for this is as follows:
[root@localhost playground]# grep -P '[0-9]{4} ms' file3.log
time taken by api is 1211 ms
time taken by api is 2000 ms
time taken by api is 3000 ms
time taken by api is 4000 ms
time taken by api is 50000 ms
time taken by api is 1000 ms
To match numbers with at least 5 digits:
[root@localhost playground]# grep -P '[0-9]{5} ms' file3.log
time taken by api is 50000 ms
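The same filter can be reproduced with Python's re module, which also makes it clear why `[0-9]{4}` matches longer numbers too: it only requires four consecutive digits immediately before " ms", and any number with five or more digits contains such a run:

```python
import re

lines = [
    'time taken by api is 1211 ms',
    'time taken by api is 456 ms',
    'time taken by api is 50000 ms',
]

# Like grep -P '[0-9]{4} ms': four digits immediately followed by " ms"
slow = [line for line in lines if re.search(r'[0-9]{4} ms', line)]
print(slow)  # 1211 and 50000 match; 456 does not
```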
Recursively grep in a directory and its subdirectories
[root@localhost playground]# grep -R 'sahil' .
./dir1/file.log:i am sahil
./dir1/file.log:sahil is a software engineer
./file1.log:i am sahil
./file1.log:sahil is a software engineer
./file2.log:i am sahil
./file2.log:sahil is a tech blogger
[root@localhost playground]#
All of the above are basic use cases of grep. One can combine grep's options, or chain grep commands with the pipe operator, to handle more complex use cases.
In future blogs I will explain some complex use cases and show how to achieve them using Linux commands that ease log debugging.
The CISA Vulnerability Bulletin provides a summary of new vulnerabilities that have been recorded by the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD) in the past week. NVD is sponsored by CISA. In some cases, the vulnerabilities in the bulletin may not yet have assigned CVSS scores. Please visit NVD for updated vulnerability entries, which include CVSS scores once they are available.
Vulnerabilities are based on the Common Vulnerabilities and Exposures (CVE) vulnerability naming standard and are organized according to severity, determined by the Common Vulnerability Scoring System (CVSS) standard. The division of high, medium, and low severities correspond to the following scores:
High: vulnerabilities with a CVSS base score of 7.0–10.0
Medium: vulnerabilities with a CVSS base score of 4.0–6.9
Low: vulnerabilities with a CVSS base score of 0.0–3.9
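These bands can be written as a small helper function (a sketch of the bulletin's banding described above, not an official CVSS implementation):

```python
def severity(cvss_base_score):
    """Map a CVSS base score to the bulletin's severity band."""
    if not 0.0 <= cvss_base_score <= 10.0:
        raise ValueError('CVSS base scores range from 0.0 to 10.0')
    if cvss_base_score >= 7.0:
        return 'High'
    if cvss_base_score >= 4.0:
        return 'Medium'
    return 'Low'

print(severity(9.8), severity(5.5), severity(2.1))  # High Medium Low
```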
Entries may include additional information provided by organizations and efforts sponsored by CISA. This information may include identifying information, values, definitions, and related links. Patch information is provided when available. Please note that some of the information in the bulletin is compiled from external, open-source reports and is not a direct result of CISA analysis.
In this blog I am covering High vulnerabilities only, plus some Medium and Low ones that seem important to me.
For the list of all vulnerabilities, you can check the CISA Bulletin.
High Vulnerabilities
Primary Vendor — Product | Description | Published | CVSS Score | Source & Patch Info
apache — nuttx
Apache Nuttx Versions prior to 10.1.0 are vulnerable to integer wrap-around in functions malloc, realloc and memalign. This improper memory assignment can lead to arbitrary memory allocation, resulting in unexpected behavior such as a crash or a remote code injection/execution.
The Autoptimize WordPress plugin before 2.7.8 attempts to delete malicious files (such as .php) form the uploaded archive via the “Import Settings” feature, after its extraction. However, the extracted folders are not checked and it is possible to upload a zip which contained a directory with PHP file in it and then it is not removed from the disk. It is a bypass of CVE-2020-24948 which allows sending a PHP file via the “Import Settings” functionality to achieve Remote Code Execution.
In the Location Manager WordPress plugin before 2.1.0.10, the AJAX action gd_popular_location_list did not properly sanitise or validate some of its POST parameters, which are then used in a SQL statement, leading to unauthenticated SQL Injection issues.
An issue was discovered in Cleo LexiCom 5.5.0.0. Within the AS2 message, the sender can specify a filename. This filename can include path-traversal characters, allowing the file to be written to an arbitrary location on disk.
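To illustrate the defence against this class of bug: resolve the untrusted filename against the intended base directory, normalize the result, and reject anything that escapes. A sketch in Java using java.nio.file (the directory names are hypothetical):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class SafePath {
    // Resolve an untrusted filename under baseDir and reject anything that
    // escapes it after normalization (e.g. "../../etc/passwd").
    public static boolean isSafe(String baseDir, String untrustedName) {
        Path base = Paths.get(baseDir).toAbsolutePath().normalize();
        Path target = base.resolve(untrustedName).normalize();
        return target.startsWith(base);
    }
}
```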
Contiki-NG is an open-source, cross-platform operating system for internet of things devices. A buffer overflow vulnerability exists in Contiki-NG versions prior to 4.6. After establishing a TCP socket using the tcp-socket library, it is possible for the remote end to send a packet with a data offset that is unvalidated. The problem has been patched in Contiki-NG 4.6. Users can apply the patch for this vulnerability out-of-band as a workaround.
Contiki-NG is an open-source, cross-platform operating system for internet of things devices. It is possible to cause an out-of-bounds write in versions of Contiki-NG prior to 4.6 when transmitting a 6LoWPAN packet with a chain of extension headers. Unfortunately, the written header is not checked to be within the available space, thereby making it possible to write outside the buffer. The problem has been patched in Contiki-NG 4.6. Users can apply the patch for this vulnerability out-of-band as a workaround.
Contiki-NG is an open-source, cross-platform operating system for internet of things devices. In versions prior to 4.6, an attacker can perform a denial-of-service attack by triggering an infinite loop in the processing of IPv6 neighbor solicitation (NS) messages. This type of attack can effectively shut down the operation of the system because of the cooperative scheduling used for the main parts of Contiki-NG and its communication stack. The problem has been patched in Contiki-NG 4.6. Users can apply the patch for this vulnerability out-of-band as a workaround.
Contiki-NG is an open-source, cross-platform operating system for internet of things devices. In versions prior to 4.5, buffer overflow can be triggered by an input packet when using either of Contiki-NG’s two RPL implementations in source-routing mode. The problem has been patched in Contiki-NG 4.5. Users can apply the patch for this vulnerability out-of-band as a workaround.
In updateDrawable of StatusBarIconView.java, there is a possible permission bypass due to an uncaught exception. This could lead to local escalation of privilege by running foreground services without notifying the user, with User execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-10, Android-11, Android-8.1, Android-9. Android ID: A-169255797.
In handle_rc_metamsg_cmd of btif_rc.cc, there is a possible out of bounds write due to a missing bounds check. This could lead to remote code execution over Bluetooth with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-11, Android-8.1, Android-9, Android-10. Android ID: A-181860042.
In the Settings app, there is a possible way to disable an always-on VPN due to a missing permission check. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-11. Android ID: A-179975048.
In p2p_process_prov_disc_req of p2p_pd.c, there is a possible out of bounds read and write due to a use after free. This could lead to remote escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-11, Android-8.1, Android-9, Android-10. Android ID: A-181660448.
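Several of the entries above come down to the same defect class: a length controlled by the remote peer is used to copy data without first checking it against the source and destination buffers. A hedged Java sketch of the check that was missing (the names are mine; the real code is C++):

```java
public class BoundsCheck {
    // Copy len bytes from src into a fixed-size buffer, validating the
    // attacker-controlled length first instead of trusting it blindly.
    public static byte[] safeCopy(byte[] src, int len, int bufSize) {
        if (len < 0 || len > src.length || len > bufSize) {
            throw new IllegalArgumentException("length out of bounds: " + len);
        }
        byte[] buf = new byte[bufSize];
        System.arraycopy(src, 0, buf, 0, len);
        return buf;
    }
}
```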
Joomla! Core is prone to a security bypass vulnerability. Exploiting this issue may allow attackers to perform otherwise restricted actions and subsequently retrieve password reset tokens from the database through an already existing SQL injection vector. Joomla! Core versions 1.5.x ranging from 1.5.0 and up to and including 1.5.15 are vulnerable.
Joomla! Core is prone to a vulnerability that lets attackers upload arbitrary files because the application fails to properly verify user-supplied input. An attacker can exploit this vulnerability to upload arbitrary code and run it in the context of the webserver process. This may facilitate unauthorized access or privilege escalation; other attacks are also possible. Joomla! Core versions 1.5.x ranging from 1.5.0 and up to and including 1.5.15 are vulnerable.
Secure 8 (Evalos) does not validate user input data correctly, allowing a remote attacker to perform a Blind SQL Injection. An attacker could exploit this vulnerability in order to extract information of users and administrator accounts stored in the database.
The Fancy Product Designer WordPress plugin before 4.6.9 allows unauthenticated attackers to upload arbitrary files, resulting in remote code execution.
SerenityOS before commit 3844e8569689dd476064a0759d704bc64fb3ca2c contains a directory traversal vulnerability in tar/unzip that may lead to command execution or privilege escalation.
White Shark System (WSS) 1.3.2 has an unauthorized access vulnerability in default_user_edit.php, remote attackers can exploit this vulnerability to escalate to admin privileges.
Hibernate is the most popular ORM framework used to interact with databases in Java. In this article we will look at the various ways bulk selection and update can be done on a table, and which is the most effective when using the Hibernate framework in Java.
I experimented with three approaches:
Using Hibernate’s Query.list() method.
Using ScrollableResults with FORWARD_ONLY scroll mode.
Using ScrollableResults with FORWARD_ONLY scroll mode in a StatelessSession.
To decide which one gives the best performance for our use case, I performed the following test with each of the three approaches listed above:
Select and update 1000 rows.
Let's look at the code and results of applying each of the three approaches to this operation, one by one.
Using Hibernate’s Query.list() method.
Code Executed :
List rows;
Session session = getSession();
Transaction transaction = session.beginTransaction();
try {
    Query query = session.createQuery("FROM PersonEntity WHERE id > :maxId ORDER BY id")
            .setParameter("maxId", MAX_ID_VALUE);
    query.setMaxResults(1000);
    rows = query.list();
    int count = 0;
    for (Object row : rows) {
        PersonEntity personEntity = (PersonEntity) row;
        personEntity.setName(randomAlphaNumeric(30));
        session.saveOrUpdate(personEntity);
        // Always flush and clear the session after updating 50 rows
        // (the jdbc_batch_size specified in hibernate.properties)
        if (++count % 50 == 0) {
            session.flush();
            session.clear();
        }
    }
} finally {
    if (session != null && session.isOpen()) {
        transaction.commit();
        session.close();
    }
}
Tests Results :
Time taken:- 360s to 400s
Heap Pattern:- gradually increased from 13mb to 51mb (measured using jconsole).
Using ScrollableResults with FORWARD_ONLY scroll mode.
With this, we expect it to consume less memory than the first approach. Let's see the results.
Code Executed :
Session session = getSession();
Transaction transaction = session.beginTransaction();
ScrollableResults scrollableResults = session
        .createQuery("FROM PersonEntity WHERE id > " + MAX_ID_VALUE + " ORDER BY id")
        .setMaxResults(1000).scroll(ScrollMode.FORWARD_ONLY);
int count = 0;
try {
    while (scrollableResults.next()) {
        PersonEntity personEntity = (PersonEntity) scrollableResults.get(0);
        personEntity.setName(randomAlphaNumeric(30));
        session.saveOrUpdate(personEntity);
        if (++count % 50 == 0) {
            session.flush();
            session.clear();
        }
    }
} finally {
    if (session != null && session.isOpen()) {
        transaction.commit();
        session.close();
    }
}
Tests Results :
Time taken:- 185s to 200s
Heap Pattern:- gradually increased from 13mb to 41mb (again measured using jconsole).
Using ScrollableResults with FORWARD_ONLY scroll mode in a StatelessSession.
A stateless session does not implement a first-level cache nor interact with any second-level cache, nor does it implement transactional write-behind or automatic dirty checking, nor do operations cascade to associated instances. Collections are ignored by a stateless session. Operations performed via a stateless session bypass Hibernate’s event model and interceptors.
This type of session is recommended for bulk updates, as we really do not need the overhead of these Hibernate features in such use cases.
Code Executed :
StatelessSession session = getStatelessSession();
Transaction transaction = session.beginTransaction();
ScrollableResults scrollableResults = session
        .createQuery("FROM PersonEntity WHERE id > " + MAX_ID_VALUE + " ORDER BY id")
        .setMaxResults(TRANSACTION_BATCH_SIZE).scroll(ScrollMode.FORWARD_ONLY);
try {
    while (scrollableResults.next()) {
        PersonEntity personEntity = (PersonEntity) scrollableResults.get(0);
        personEntity.setName(randomAlphaNumeric(20));
        session.update(personEntity);
    }
} finally {
    if (session != null && session.isOpen()) {
        transaction.commit();
        session.close();
    }
}
Tests Results :
Time taken:- 185s to 200s
Heap Pattern:- gradually increased from 13mb to 39mb
I also performed the same tests with 2000 rows and the results obtained were as follows:-
Results:-
Using list():- time taken:- approx 750s, heap pattern:- gradually increased from 13mb to 74 mb
Using ScrollableResultSet:- time taken:- approx 380s, heap pattern:- gradually increased from 13mb to 46mb
Using Stateless:- time taken:- approx 380s, heap pattern:- gradually increased from 13mb to 43mb
Blocker problem with all the approaches tried above
ScrollableResults and stateless ScrollableResults give almost the same performance, which is much better than Query.list(). But there is still one problem with all the above approaches: locking. All of them select and update the data in the same transaction, which means that for as long as the transaction is running, the rows that have been updated stay locked, and any other operation on them has to wait for the transaction to finish.
Solution :
There are two things we should do here to solve the above problem:
Select and update the data in different transactions.
Perform these updates in batches.
So again I performed the same tests as above, but this time the update was performed in a different transaction, which was committed in batches of 50.
Note:- In the case of the scrollable and stateless approaches, we also need a separate session for the updates, because the original session and transaction are needed to scroll through the results.
Results using Batch Processing
Using list():- time taken:- approx 400s, heap pattern:- gradually increased from 13mb to 61 mb
Using ScrollableResultSet:- time taken:- approx 380s, heap pattern:- gradually increased from 13mb to 51mb
Using Stateless:- time taken:- approx 190s, heap pattern:- gradually increased from 13mb to 44mb
Observation:- This time the performance of ScrollableResults dropped to almost that of Query.list(), but the performance of the stateless approach remained almost the same.
Summary and Conclusion
From all the above experimentation, in cases where we need to do bulk selection and update, the best approach in terms of memory consumption and time is as follows:
Use ScrollableResults in a Stateless Session.
Perform selection and update in different transactions, in batches of 20 to 50 (batch processing). (Note: the right batch size can vary from case to case.)
Sample Code with the best approach
StatelessSession session = getStatelessSession();
Transaction transaction = session.beginTransaction();
ScrollableResults scrollableResults = session
        .createQuery("FROM PersonEntity WHERE id > " + MAX_ID_VALUE + " ORDER BY id")
        .setMaxResults(TRANSACTION_BATCH_SIZE).scroll(ScrollMode.FORWARD_ONLY);
int count = 0;
try {
    StatelessSession updateSession = getStatelessSession();
    Transaction updateTransaction = updateSession.beginTransaction();
    while (scrollableResults.next()) {
        PersonEntity personEntity = (PersonEntity) scrollableResults.get(0);
        personEntity.setName(randomAlphaNumeric(5));
        updateSession.update(personEntity);
        // Commit the current batch and start a new transaction every 50 rows
        if (++count % 50 == 0) {
            updateTransaction.commit();
            updateTransaction = updateSession.beginTransaction();
        }
    }
    // Commit any remaining rows that did not fill a full batch
    updateTransaction.commit();
    updateSession.close();
} finally {
    if (session != null && session.isOpen()) {
        transaction.commit();
        session.close();
    }
}
With Java frameworks like Spring, this code can be even smaller, for example by not having to take care of session closing. The code above is written in plain Java using Hibernate.
Please try this with larger data sets and share your results in the comments. Also, if you have a better approach, please leave a comment.
The CISA Vulnerability Bulletin provides a summary of new vulnerabilities that have been recorded by the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD) in the past week. NVD is sponsored by CISA. In some cases, the vulnerabilities in the bulletin may not yet have assigned CVSS scores. Please visit NVD for updated vulnerability entries, which include CVSS scores once they are available.
High Vulnerabilities
Primary Vendor — Product
Description
Published
CVSS Score
Source & Patch Info
bloofox — bloofoxcms
bloofoxCMS 0.5.2.1 is infected with Unrestricted File Upload that allows attackers to upload malicious files (ex: php files).
In avrc_msg_cback of avrc_api.cc, there is a possible out of bounds write due to a heap buffer overflow. This could lead to remote code execution with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-11, Android-8.1, Android-9, Android-10. Android ID: A-177611958.
In memory management driver, there is a possible out of bounds write due to a missing bounds check. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android SoC. Android ID: A-183464866.
In memory management driver, there is a possible memory corruption due to a double free. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android SoC. Android ID: A-183461321.
In memory management driver, there is a possible memory corruption due to a use after free. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android SoC. Android ID: A-183461320.
In memory management driver, there is a possible memory corruption due to a use after free. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android SoC. Android ID: A-183467912.
In memory management driver, there is a possible out of bounds write due to uninitialized data. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android SoC. Android ID: A-183459083.
In memory management driver, there is a possible out of bounds write due to an integer overflow. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android SoC. Android ID: A-183461318.
In memory management driver, there is a possible out of bounds write due to a missing bounds check. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android SoC. Android ID: A-183461317.
In memory management driver, there is a possible out of bounds write due to a missing bounds check. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android SoC. Android ID: A-183459078.
In memory management driver, there is a possible escalation of privilege due to a missing permission check. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android SoC. Android ID: A-183461315.
In memory management driver, there is a possible out of bounds write due to a missing bounds check. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android SoC. Android ID: A-183464868.
In onCreate of CalendarDebugActivity.java, there is a possible way to export calendar data to the sdcard without user consent due to a tapjacking/overlay attack. This could lead to local escalation of privilege with User execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-11. Android ID: A-174046397.
In onActivityResult of EditUserPhotoController.java, there is a possible access of unauthorized files due to an unexpected URI handler. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is needed for exploitation. Product: Android. Versions: Android-8.1, Android-9, Android-10, Android-11. Android ID: A-172939189.
In getMinimalSize of PipBoundsAlgorithm.java, there is a possible bypass of restrictions on background processes due to a permissions bypass. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-11. Android ID: A-174302616.
In notifyScreenshotError of ScreenshotNotificationsController.java, there is a possible permission bypass due to an unsafe PendingIntent. This could lead to local escalation of privilege with User execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-10, Android-11, Android-8.1, Android-9. Android ID: A-178189250.
An improper input validation vulnerability in sflacfd_get_frm() in libsflacextractor library prior to SMR MAY-2021 Release 1 allows attackers to execute arbitrary code on mediaextractor process.
An improper input validation vulnerability in sdfffd_parse_chunk_FVER() in libsdffextractor library prior to SMR MAY-2021 Release 1 allows attackers to execute arbitrary code on mediaextractor process.
An improper input validation vulnerability in sdfffd_parse_chunk_PROP() in libsdffextractor library prior to SMR MAY-2021 Release 1 allows attackers to execute arbitrary code on mediaextractor process.
An improper input validation vulnerability in sdfffd_parse_chunk_PROP() with Sample Rate Chunk in libsdffextractor library prior to SMR MAY-2021 Release 1 allows attackers to execute arbitrary code on mediaextractor process.
An improper input validation vulnerability in scmn_mfal_read() in libsapeextractor library prior to SMR MAY-2021 Release 1 allows attackers to execute arbitrary code on mediaextractor process.
In on_l2cap_data_ind of btif_sock_l2cap.cc, there is a possible memory corruption due to a use after free. This could lead to remote code execution over Bluetooth with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-11, Android-10. Android ID: A-175686168.
In rw_t3t_process_error of rw_t3t.cc, there is a possible double free due to uninitialized data. This could lead to remote code execution over NFC with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-9, Android-10, Android-11, Android-8.1. Android ID: A-179687208.
An improper access control vulnerability in genericssoservice prior to SMR JUN-2021 Release 1 allows local attackers to execute protected activity with system privilege via untrusted applications.
High Vulnerabilities
Product
Description
Published
CVSS Score
Source & Patch Info
linux — linux_kernel
The eBPF RINGBUF bpf_ringbuf_reserve() function in the Linux kernel did not check that the allocated size was smaller than the ringbuf size, allowing an attacker to perform out-of-bounds writes within the kernel and therefore, arbitrary code execution. This issue was fixed via commit 4b81ccebaeee (“bpf, ringbuf: Deny reserve of buffers larger than ringbuf”) (v5.13-rc4) and backported to the stable kernels in v5.12.4, v5.11.21, and v5.10.37. It was introduced via 457f44363a88 (“bpf: Implement BPF ring buffer and verifier support for it”) (v5.8-rc1).
The eBPF ALU32 bounds tracking for bitwise ops (AND, OR and XOR) in the Linux kernel did not properly update 32-bit bounds, which could be turned into out of bounds reads and writes in the Linux kernel and therefore, arbitrary code execution. This issue was fixed via commit 049c4e13714e (“bpf: Fix alu32 const subreg bound tracking on bitwise operations”) (v5.13-rc4) and backported to the stable kernels in v5.12.4, v5.11.21, and v5.10.37. The AND/OR issues were introduced by commit 3f50f132d840 (“bpf: Verifier, do explicit ALU32 bounds tracking”) (5.7-rc1) and the XOR variant was introduced by 2921c90d4718 (“bpf:Fix a verifier failure with xor”) ( 5.10-rc1).
The io_uring subsystem in the Linux kernel allowed the MAX_RW_COUNT limit to be bypassed in the PROVIDE_BUFFERS operation, which led to negative values being used in mem_rw when reading /proc/<PID>/mem. This could be used to create a heap overflow leading to arbitrary code execution in the kernel. It was addressed via commit d1f82808877b (“io_uring: truncate lengths larger than MAX_RW_COUNT on provide buffers”) (v5.13-rc1) and backported to the stable kernels in v5.12.4, v5.11.21, and v5.10.37. It was introduced in ddf0322db79c (“io_uring: add IORING_OP_PROVIDE_BUFFERS”) (v5.7-rc1).
Out of bound read will happen if EAPOL Key length is less than expected while processing NAN shared key descriptor attribute in Snapdragon Auto, Snapdragon Compute, Snapdragon Connectivity, Snapdragon Consumer Electronics Connectivity, Snapdragon Consumer IOT, Snapdragon Industrial IOT, Snapdragon IoT, Snapdragon Mobile, Snapdragon Voice & Music, Snapdragon Wired Infrastructure and Networking
OpenVPN Access Server 2.7.3 to 2.8.7 allows remote attackers to trigger an assert during the user authentication phase via incorrect authentication token data in an early phase of the user authentication resulting in a denial of service.