There’s even a ton of “optimization scripts” out there – all-in-one flashable .zips that promise to significantly increase performance, battery life, and more. Some of the tweaks can actually work, but the majority are pure placebo, or worse, actively harm your device. That isn’t to say people are releasing nefarious scripts intentionally – there are certainly bogus paid apps on the Play Store, but optimization scripts released on Android forums are generally well-intentioned; it just so happens that the developer may be misinformed, or simply experimenting with various tweaks.

Unfortunately, a sort of snowball effect tends to occur, especially with “all-in-one” optimization scripts. A small handful of the tweaks may actually do something, while another set may do absolutely nothing at all – yet these scripts get passed around as magic bullets, without any real investigation into what works and what doesn’t. As a result, a lot of all-in-one optimization scripts reuse the same methods, some of which are completely outdated or harmful in the long run. In short, the majority of “all-in-one” optimization scripts are slapped-together collections of recommended tunings with no clear idea of how or why these optimizations “work” – users flash the scripts and claim their device is suddenly faster, when in fact it was most likely the simple act of rebooting that caused the improvement, since a reboot clears out everything in the device’s RAM.

In this Appuals exclusive article, we’ll be highlighting some of the most common recommendations for “optimizing” Android performance, and whether each is simply a myth or a legitimate tweak for device performance.

Swap

At the top of the myth list is Android swap – which is pretty absurd as an Android “optimization.” Swap’s main purpose is to create a paging file on storage, which frees up space in RAM. That sounds sensible on paper, but it’s really applicable to a server, which has almost no interactivity. When you use your Android phone’s swap regularly, it leads to severe lag that stems from things slipping past the cache. Imagine, for example, an application trying to display a graphic that has been pushed out to swap – the device now has to re-read it from slow storage after first making room by swapping out another application’s data. It’s really messy. Some optimization enthusiasts may say that swap caused them no problems, but it isn’t swap producing the performance boost – it’s the built-in Android mechanism lowmemorykiller, which regularly kills bloated background processes that are not being used. LMK was designed specifically for handling low-memory conditions, is invoked from the kswapd process, and generally kills user-space processes. This is different from the OOM killer (out-of-memory killer), but that’s a different topic altogether. The point is, a device with, for example, 1 GB of RAM can never reach the necessary I/O performance for swap to help, so swap is absolutely not needed on Android. Its implementation is simply fraught with lag and leads to a degradation in performance, rather than optimizing it.
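You can confirm this on a stock device yourself – a minimal check, assuming a Linux/Android shell with /proc mounted (via adb shell, for instance):

```shell
# On stock Android firmware, swap is normally absent entirely:
grep -E 'SwapTotal|SwapFree' /proc/meminfo
# SwapTotal:             0 kB   <- typical stock value (no swap configured)
```

A non-zero SwapTotal on a phone almost always means a custom kernel or script has enabled swap or zRAM behind your back.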

zRAM – Outdated and No Longer Efficient

zRAM is a proven and effective method of optimization – for older devices. Think KitKat-based devices operating with only about 512 MB of RAM. The fact that some people still include zRAM tweaks in optimization scripts, or recommend zRAM as some kind of modern optimization, is an example of people not keeping up with current practice. zRAM was intended for entry-level, budget-range multi-core SoCs, such as devices using MTK chipsets with 512 MB of RAM – very cheap Chinese phones, basically. What zRAM basically does is create a compressed block device in RAM that is used as swap, so pages swapped out are compressed in memory instead of being written to slow flash storage. On older devices with a single core, however – even those where zRAM is recommended – the compression work itself produces large quantities of lag. The same happens with KSM (Kernel Same-page Merging), which merges identical memory pages in a bid to free space. KSM is in fact recommended by Google, but on older devices it leads to greater lag, because the constantly active kernel threads run continuously, scanning memory for duplicate pages. Ironically, trying to run the optimization tweak slows the device down even further.
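For reference, this is roughly how a zRAM swap device is configured on kernels that ship the module – a sketch assuming a rooted shell and a kernel built with zram support; device paths and sizes are illustrative:

```shell
# Illustrative zRAM setup (requires root and zram kernel support):
echo lz4 > /sys/block/zram0/comp_algorithm   # pick a compression algorithm
echo 536870912 > /sys/block/zram0/disksize   # 512 MB compressed swap device
mkswap /dev/block/zram0                      # format the device as swap
swapon /dev/block/zram0                      # enable it
```

On a single-core 512 MB phone, every page swapped through this device costs CPU time to compress and decompress – which is exactly where the lag comes from.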

Seeder – Outdated Since Android 3.0

One of the most debated optimization tools amongst Android devs is Seeder, and we’re sure someone will try to prove us wrong on this topic – but first we need to examine Seeder’s history. Yes, there are a large number of reports declaring better Android performance after installation – on much older Android devices. However, people for whatever reason believe this means it is also an applicable optimization for modern Android devices, which is absolutely absurd. The fact that Seeder is still maintained and offered as a “modern” lag-reduction tool is an example of misinformation – though this is not the fault of Seeder’s developer, as even their Play Store page notes that Seeder is less effective after Android 4.0+. Yet for whatever reason, Seeder still pops up in optimization discussions for modern Android systems.

What Seeder basically did for Android before 3.0 was address a bug where the Android runtime would actively use /dev/random to acquire entropy. The /dev/random pool would become depleted, and the system would block until it refilled with the required amount of data – gathered from little things like the various sensors and buttons on the Android device. Seeder’s author took the Linux daemon rngd and compiled it for Android so that it took random data from the much faster and more predictable /dev/urandom and fed it into /dev/random every second, never allowing /dev/random to become exhausted. The result was an Android system that did not experience a lack of entropy, and performed much more smoothly. Google squashed this bug after Android 3.0, yet for some reason Seeder still pops up on “recommended tweaks” lists for Android performance optimization. Furthermore, the Seeder app has a few analogues, such as sEFix, which include Seeder’s functionality, whether using the same rngd, the alternative haveged, or even just a symlink between /dev/urandom and /dev/random. This is absolutely pointless for modern Android systems.
The reason it’s pointless is that newer Android versions use /dev/random in only three main places: libcrypto, for encrypting SSL connections, generating SSH keys, and so on; wpa_supplicant/hostapd, which generate WEP/WPA keys; and a handful of libraries that generate IDs when creating EXT2/EXT3/EXT4 file systems. So when Seeder or Seeder-based enhancements are included in modern Android optimization scripts, the end result is a degradation in device performance: rngd constantly wakes the device and causes increases in CPU frequency, which of course negatively affects battery consumption.
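For illustration, the crudest of the Seeder analogues mentioned above boils down to a single symlink – shown here purely to document the technique, not to recommend it (requires root, and the change does not survive a reboot unless re-applied):

```shell
# What sEFix-style "entropy" tweaks effectively do (NOT recommended):
mv /dev/random /dev/random.bak      # keep the original device node around
ln -s /dev/urandom /dev/random      # redirect /dev/random to /dev/urandom
```

On modern Android this accomplishes nothing except weakening the entropy guarantees for the handful of components that still use /dev/random.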

Odex

The stock firmware on Android devices is pretty much always odexed. This means that alongside the standard APK packages for Android apps, found in /system/app/ and /system/priv-app/, sit files of the same name with a .odex extension. The odex files contain optimized application bytecode which has already passed through the virtual machine’s validator and optimizer, then been recorded in a separate file using something like the dexopt tool. So odex files are meant to offload the virtual machine and speed up launching of the odexed application – on the downside, odex files prevent modifications to the firmware and create problems with updates, which is why many custom ROMs like LineageOS are distributed deodexed. Generating odex files can be done in a number of ways, such as with Odexer Tool – the problem is that it’s purely a placebo effect. When a modern Android system does not find odex files in the /system directory, the system will simply create them itself and place them in the /system/dalvik-cache/ directory. This is exactly what is happening when, for example, you flash a new Android version and it shows the “Busy, Optimizing Applications” message for a while.
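You can see both halves of this on a device – a quick look around, assuming an adb or rooted shell; the exact per-app directory layout varies between Android versions:

```shell
# On odexed firmware, pre-optimized code sits next to the APKs in /system:
ls /system/app/ /system/priv-app/
# On deodexed ROMs, the runtime generates its own optimized files here instead:
ls /data/dalvik-cache/
```

Either way the optimized code exists – the only difference is whether the ROM shipped it or the device generated it on first boot, which is why manually odexing buys you nothing.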

Lowmemorykiller tweaks

Multitasking in Android differs from other mobile operating systems in the sense that it’s based on a classical model: applications keep working quietly in the background, and there are no restrictions on the number of background apps (unless one is set in Developer Options, which is generally recommended against). Apps are not stopped when they transition to background execution, although the system reserves the right to kill background apps in low-memory situations (see where we talked about lowmemorykiller and the out-of-memory killer earlier in this guide). Thanks to the lowmemorykiller mechanism, Android can continue to operate with a limited amount of memory and no swap partition. The user can keep launching applications and switching between them, and the system will silently kill unused background apps to free up memory for active tasks.

This was highly useful for Android in the early days, though for some reason it became popularized in the form of task-killer apps, which are generally more harmful than beneficial. Task-killer apps either wake up at set intervals or are run by the user, and appear to free up large amounts of RAM, which is seen as a positive – more free RAM means a faster device, right? This isn’t the case with Android, however. In fact, having a large amount of free RAM can actually be harmful to your device’s performance and battery life. When apps are cached in Android’s RAM, it’s much easier to call them up and launch them again – the system doesn’t need to devote many resources to switching to the app, because it’s already in memory. Because of this, task-killers aren’t as popular as they once were, though Android novices still tend to rely on them for some reason (lack of information, sadly). Unfortunately, a new trend has replaced task-killers: tuning the lowmemorykiller mechanism itself.
An example would be the MinFreeManager app, the main idea being to raise the amount of free RAM required before the system starts killing background apps. The standard thresholds sit at roughly 4, 8, 12, 24, 32, and 40 MB – when free memory drops below the 40 MB threshold, one of the cached apps that is loaded into memory but not running will be terminated. Android will thus always keep at least 40 MB of memory available, which is enough to accommodate one more application before lowmemorykiller begins its cleanup process – meaning Android always does its best to use the maximum amount of available RAM without interfering with the user experience.

Sadly, what some homebrew enthusiasts started recommending is that the value be raised to, for example, 100 MB before LMK kicks in. The user then actually loses RAM (100 − 40 = 60 MB): instead of using this space to cache background apps, the system keeps it free, with absolutely no purpose for it. LMK tuning can be useful for much older devices with 512 MB of RAM, but who owns those anymore? 2 GB is the modern budget range, and even 4 GB RAM devices are seen as merely “mid-range” these days, so LMK tweaks are really outdated and useless.
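The thresholds live in a kernel module parameter, expressed in 4 KB pages rather than megabytes – a sketch assuming a kernel that still uses the classic lowmemorykiller driver (newer kernels use the userspace lmkd instead):

```shell
# Read the current minfree thresholds (values are in 4 KB pages):
cat /sys/module/lowmemorykiller/parameters/minfree
# e.g. 1024,2048,3072,6144,8192,10240  ->  4, 8, 12, 24, 32, 40 MB

# What MinFreeManager-style tweaks do (requires root; NOT recommended):
# 25600 pages = 100 MB before LMK starts killing cached apps
# echo "1024,2048,3072,6144,8192,25600" > /sys/module/lowmemorykiller/parameters/minfree
```

The conversion is simple: pages × 4 KB = bytes, so 10240 pages is the stock 40 MB ceiling and 25600 pages is the “optimized” 100 MB one.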

I/O tweaks

In a lot of optimization scripts for Android you’ll often find tweaks that address the I/O subsystem. For example, the ThunderBolt! script loops over the block-device entries under /sys (with a $i variable holding each device’s path), tells the I/O scheduler it is dealing with an SSD, and raises the maximum I/O queue size from 128 to 1024. It then sets a series of tunables for the CFQ scheduler, followed by similar lines for the other schedulers. But ultimately the first two tweaks are pointless, because: a modern Linux kernel understands what type of storage medium it is working with by default; and a long I/O queue (such as 1024) is useless on a modern Android device – in fact it’s meaningless even on a desktop, and is really only recommended on heavy-duty servers. Your phone is not a heavy-duty Linux server. On an Android device there are virtually no applications prioritized in I/O and no mechanical drive, so the best scheduler is the noop/FIFO queue, and this type of scheduler “tweak” does nothing special or meaningful for the I/O subsystem. All of those multi-screen lists of commands are better replaced by a simple loop that enables the noop scheduler for every drive and disables the accumulation of I/O statistics – which should have a positive impact on performance, although a very tiny and almost completely negligible one.

Another useless I/O tweak often found in performance scripts is raising the read-ahead value for SD cards up to 2 MB. The read-ahead mechanism performs early data reads from the media, before the app requests access to that data: the kernel tries to figure out what data will be needed in the future and pre-loads it into RAM, which should thus reduce return time.
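Returning to the scheduler tweaks for a moment: we haven’t reproduced the script verbatim, but the pattern in question looks roughly like this – a hypothetical reconstruction (exact paths and values vary between script versions), followed by the simple loop that makes the whole multi-screen list redundant:

```shell
# Hypothetical reconstruction of ThunderBolt!-style I/O tweaks
# ($i iterates over the block devices in /sys; requires root):
for i in /sys/block/*; do
    echo 0    > $i/queue/rotational    # claim the storage is an SSD
    echo 1024 > $i/queue/nr_requests   # raise the I/O queue depth from 128
done

# The simple replacement loop: noop scheduler on, I/O statistics off
for i in /sys/block/*/queue; do
    echo noop > $i/scheduler
    echo 0    > $i/iostats
done
```

Both loops touch real sysfs attributes, but only the second one does anything even marginally useful on a phone – and even that gain is negligible.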
Read-ahead sounds great on paper, but the algorithm is often wrong, which leads to totally unnecessary I/O operations, not to mention high RAM consumption. High read-ahead values of between 1 and 8 MB are recommended for RAID arrays, but for Android devices it’s best to just leave the default value of 128 KB alone.

Virtual Memory Management System Tweaks

Another common “optimization” technique is tuning the virtual memory management subsystem. This typically targets just two kernel variables, vm.dirty_background_ratio and vm.dirty_ratio, which adjust the size of the buffers for storing “dirty” data. Dirty data is data that an application has written, but which is still sitting in memory waiting to be flushed to disk. With the typical values found on both Linux distros and Android, when the dirty buffer reaches 10% of total RAM (vm.dirty_background_ratio), the kernel wakes the pdflush thread, which starts writing data out to disk in the background. If the write activity is so intense that the buffer keeps growing and reaches 20% of RAM (vm.dirty_ratio), the system switches to synchronous writes, without the pre-buffer – the writing application is blocked until its data is actually on disk (AKA ‘lag’). What you should also understand is that even if the buffer never reaches 10%, the system will automatically kick off pdflush after 30 seconds anyway. A combination of 10/20 is pretty reasonable: on a device with 1 GB of RAM this equals 100/200 MB of RAM, which is more than enough for burst writes – where the burst speed often exceeds the write speed of the system’s NAND memory or SD card – such as when installing apps or copying files from a computer. For some reason, though, script writers try to push these values even higher, to absurd rates.
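You can inspect the current thresholds without root on any Linux-based system, Android included – in this sketch the write lines are commented out since they require root, and the 50/90 figures are simply the kind of values these scripts use:

```shell
# Read the current writeback thresholds (percent of total RAM; no root needed):
cat /proc/sys/vm/dirty_background_ratio   # commonly 10
cat /proc/sys/vm/dirty_ratio              # commonly 20

# What over-eager scripts change them to (requires root; not recommended):
# echo 50 > /proc/sys/vm/dirty_background_ratio
# echo 90 > /proc/sys/vm/dirty_ratio
```

Since both values are percentages of RAM, the same tweak behaves very differently on a 512 MB phone and a 4 GB one – another reason copy-pasted values are meaningless.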
In the Xplix optimization script, for example, we can find rates as high as 50/90. On a device with 1 GB of memory this sets the dirty-buffer limits to 500/900 MB, which is completely useless on an Android device, because it would only pay off under constant, sustained writing to disk – something that only happens on a heavy-duty Linux server. The ThunderBolt! script uses more reasonable values (one set of commands for smartphones with 512 MB of RAM, a second for 1 GB, and a third for more than 1 GB), but overall it’s still fairly meaningless. In fact, there is only one reason to change the default settings: a device with very slow internal memory or a slow memory card. In that case it is reasonable to spread the two values apart – start background writeback early, but raise the synchronous threshold – so that during a surge of write operations the system does not switch to synchronous mode until the last possible moment, which reduces application lag while recording.
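Concretely, “spreading” the two variables might look like this – the 5/60 numbers are our own illustrative choice rather than values from any particular script, and writing them requires root:

```shell
# For devices with very slow internal storage only (requires root; illustrative):
echo 5  > /proc/sys/vm/dirty_background_ratio   # start background writeback early
echo 60 > /proc/sys/vm/dirty_ratio              # delay the blocking sync mode as long as possible
```

The low background ratio keeps data trickling out to flash continuously, while the high dirty ratio gives a write burst plenty of headroom before any application gets blocked.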

Additional Useless Tweaks and Performance Tunings

There are a lot more “optimizations” out there that really don’t do anything. Most of them simply have no effect whatsoever, while others may improve some aspect of performance while degrading the device in other ways (usually it boils down to performance vs. battery drain). Here are some additional popular optimizations that may or may not be useful, depending on the Android system and device.

- Acceleration – small overclocks to improve performance, and undervolting – saves a little battery.
- Database optimization – in theory this should give an improvement in device performance, but it’s doubtful.
- Zipalign – ironically, despite the Android SDK’s built-in feature for aligning content within an APK file, you can find a lot of software in stores that has not been run through zipalign.
- Disabling unnecessary system services and removing unused system apps and seldom-used third-party applications – basically, uninstalling bloatware.
- Custom kernels with optimizations for a specific device (again, not all kernels are equally good).
- The already-described I/O scheduler noop.
- The TCP congestion algorithm Westwood – more efficient for wireless networks than Android’s default Cubic; available in custom kernels.
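Since zipalign made the list: it ships with the Android SDK build tools, and checking or fixing an APK is a one-liner each way (app.apk here is a placeholder file name):

```shell
# Verify whether an APK is already 4-byte aligned:
zipalign -c -v 4 app.apk
# Produce an aligned copy (zipalign cannot align in place):
zipalign -v 4 app.apk app-aligned.apk
```

Note that this is a build-time concern: aligning an APK yourself does nothing for apps already installed through the Play Store, which handles signing and alignment in the publishing pipeline.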

Useless build.prop Settings

LaraCraft304 from the XDA Developers forum conducted a study and found that an impressive number of /system/build.prop settings recommended by “experts” do not exist anywhere in the AOSP or CyanogenMod source – meaning the system simply ignores them, and any perceived improvement from adding them is placebo.
