The three most common operating systems for personal computers are Microsoft Windows, Apple Mac OS X, and Linux.
Yes, it is the process of entering data into a database.
Answer:
Check the explanation
Explanation:
#define _MULTI_THREADED
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define THREADS 2

/* Shared state: the current argument index and a copy of the
   command-line arguments, visible to both worker threads. */
int i = 1;
int argcG;
char *argvG[1000];

/* Thread 1 prints the arguments whose first character is a vowel,
   thread 2 prints the rest. Whichever thread is not responsible for
   the current argument calls sched_yield() to let the other one run. */
void *threadfunc(void *parm)
{
    int *num = (int *)parm;

    while (1) {
        if (i >= argcG)
            break;

        if (*num == 1 &&
            (argvG[i][0] == 'a' || argvG[i][0] == 'e' || argvG[i][0] == 'i' ||
             argvG[i][0] == 'o' || argvG[i][0] == 'u')) {
            printf("%s\n", argvG[i]);
            i++;
            continue;
        }

        if (*num == 2 &&
            !(argvG[i][0] == 'a' || argvG[i][0] == 'e' || argvG[i][0] == 'i' ||
              argvG[i][0] == 'o' || argvG[i][0] == 'u')) {
            printf("%s\n", argvG[i]);
            i++;
            continue;
        }

        /* Not this thread's turn: yield the processor to the other thread. */
        sched_yield();
    }
    return NULL;
}

int main(int argc, char *argv[])
{
    pthread_t threadid[THREADS];
    int rc = 0;
    int loop = 0;
    int arr[2] = {1, 2};   /* role of each thread: 1 = vowels, 2 = others */

    /* Make the arguments visible to the worker threads. */
    argcG = argc;
    for (rc = 0; rc < argc; rc++)
        argvG[rc] = argv[rc];

    printf("Creating %d threads\n", THREADS);
    for (loop = 0; loop < THREADS; ++loop)
        rc = pthread_create(&threadid[loop], NULL, threadfunc, &arr[loop]);

    /* Wait for both workers to finish before exiting. */
    for (loop = 0; loop < THREADS; ++loop)
        rc = pthread_join(threadid[loop], NULL);

    printf("Main completed\n");
    return 0;
}
The image attached below shows a sample output.
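If the program is saved as, say, vowels.c (the file name and the compiler flags here are only an illustration and may differ on your system), it could be built and run against a list of words roughly like this on a POSIX system with GCC:

gcc -pthread vowels.c -o vowels
./vowels apple banana egg orange kiwi

Thread 1 would then print apple, egg and orange, while thread 2 prints banana and kiwi, with the two threads handing control back and forth through sched_yield() whenever the current argument is not theirs to print.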
Answer:
If we were asked to develop a new data compression tool, Huffman coding is a recommended choice because it is easy to implement and widely used.
Explanation:
The pros and the cons of Huffman coding
Huffman coding is one of the simplest compression encoding schemes and can be implemented easily and efficiently. It also has the advantage of not being patented, unlike other methods (e.g. arithmetic coding) which are, however, superior to Huffman coding in terms of resulting code length.
One thing not mentioned so far should not be kept secret, however: to decode our 96 bits of “brief wit”, the potential receiver of the bit sequence needs the codes for all letters! In fact, he does not even know which letters are encoded at all! Adding this information, which is also called the “Huffman table”, might use up more space than the original uncompressed sentence!
However, for longer texts the savings outweigh the added Huffman table length. One can also agree on a Huffman table that is not optimized for the exact text to be transmitted but is good in general. In the English language, for example, the letters “e” and “t” occur most often while “q” and “z” make up the smallest part of an average text, so one can agree on a single Huffman table that on average produces a good (= short) result. Once agreed upon, it does not have to be transmitted with every encoded text again.
One last thing to remember is that Huffman coding is not restricted to letters and text: it can be used for any symbols, numbers or “abstract things” that can be assigned a bit sequence. As such, Huffman coding plays an important role in other compression algorithms such as JPG compression for photos and MP3 for audio files.
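To make the idea more concrete, below is a small C sketch of how Huffman codes can be built (the sample sentence, the simple O(n^2) search for the two least frequent nodes, and all names in it are illustrative choices, not part of any particular library): it counts how often each character occurs, repeatedly merges the two rarest nodes into a tree, and then prints the bit string assigned to every character.

#include <stdio.h>
#include <string.h>

#define MAXSYM 256
#define MAXNODES (2 * MAXSYM)

struct node {
    long freq;
    int left, right;   /* child indices, -1 for leaves */
    int symbol;        /* valid only for leaves */
    int alive;         /* still available for merging? */
};

static struct node nodes[MAXNODES];
static int nnodes = 0;

/* Find the live node with the smallest frequency, or -1 if none is left. */
static int pick_min(void)
{
    int best = -1;
    for (int k = 0; k < nnodes; k++)
        if (nodes[k].alive && (best < 0 || nodes[k].freq < nodes[best].freq))
            best = k;
    return best;
}

/* Walk the tree, building up the bit string for each leaf. */
static void print_codes(int idx, char *prefix, int depth)
{
    if (nodes[idx].left < 0) {           /* leaf */
        prefix[depth] = '\0';
        printf("'%c' -> %s\n", nodes[idx].symbol, depth ? prefix : "0");
        return;
    }
    prefix[depth] = '0';
    print_codes(nodes[idx].left, prefix, depth + 1);
    prefix[depth] = '1';
    print_codes(nodes[idx].right, prefix, depth + 1);
}

int main(void)
{
    const char *text = "this is an example of huffman coding";
    long freq[MAXSYM] = {0};

    for (const char *p = text; *p; p++)
        freq[(unsigned char)*p]++;

    /* Create one leaf node per symbol that actually occurs. */
    for (int c = 0; c < MAXSYM; c++)
        if (freq[c] > 0)
            nodes[nnodes++] = (struct node){freq[c], -1, -1, c, 1};

    /* Repeatedly merge the two least frequent live nodes. */
    while (1) {
        int a = pick_min();
        if (a < 0) break;
        nodes[a].alive = 0;
        int b = pick_min();
        if (b < 0) { nodes[a].alive = 1; break; }  /* a is the root */
        nodes[b].alive = 0;
        nodes[nnodes++] = (struct node){nodes[a].freq + nodes[b].freq, a, b, 0, 1};
    }

    char prefix[MAXNODES];
    print_codes(pick_min(), prefix, 0);   /* the only live node left is the root */
    return 0;
}

The frequent letters end up close to the root and therefore get short codes, which is exactly where the compression comes from.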
The pros and the cons of Lempel-Ziv-Welch
The size of files usually increases to a great extent when they include lots of repetitive data or monochrome images. LZW compression is a very good technique for reducing the size of files containing such repetitive data. LZW compression is fast and simple to apply. Since it is a lossless compression technique, none of the contents of the file are lost during or after compression. The decompression algorithm always follows the compression algorithm. The LZW algorithm is efficient because it does not need to pass the string table to the decompression code: the table can be recreated during decompression, exactly as it was during compression, using the input stream as data. This avoids inserting a large string translation table alongside the compressed data.
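To show what “recreating the table from the input stream” looks like in practice, here is a small C sketch of the LZW compression loop (the sample input string, the fixed 4096-entry table and the linear dictionary search are simplifications made for illustration): the table starts with all single characters, grows whenever a new sequence is seen, and only the code numbers are written out, which is exactly why the decompressor never needs to receive the table.

#include <stdio.h>
#include <string.h>

#define DICT_SIZE 4096
#define MAX_ENTRY 64

static char dict[DICT_SIZE][MAX_ENTRY];
static int dict_len = 0;

/* Return the code of an exact dictionary match, or -1 if absent. */
static int lookup(const char *s)
{
    for (int k = 0; k < dict_len; k++)
        if (strcmp(dict[k], s) == 0)
            return k;
    return -1;
}

int main(void)
{
    const char *input = "TOBEORNOTTOBEORTOBEORNOT";

    /* Initialize the table with every single character 0..255,
       so new multi-character entries get codes starting at 256. */
    for (int c = 0; c < 256; c++) {
        dict[dict_len][0] = (char)c;
        dict[dict_len][1] = '\0';
        dict_len++;
    }

    char current[MAX_ENTRY] = "";   /* longest prefix already in the table */

    printf("Output codes:");
    for (const char *p = input; *p; p++) {
        char candidate[MAX_ENTRY];
        snprintf(candidate, sizeof candidate, "%s%c", current, *p);

        if (lookup(candidate) >= 0) {
            /* The extended sequence is still in the table: keep extending. */
            strcpy(current, candidate);
        } else {
            /* Emit the code for the known prefix and add the new sequence. */
            printf(" %d", lookup(current));
            if (dict_len < DICT_SIZE)
                strcpy(dict[dict_len++], candidate);
            current[0] = *p;
            current[1] = '\0';
        }
    }
    if (current[0] != '\0')
        printf(" %d", lookup(current));
    printf("\n");
    return 0;
}

Every repetition of “TOBEOR” in the input ends up being replaced by a handful of table codes instead of the original characters, and the decompressor can rebuild the same table entry by entry simply by reading the codes in order.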