Tuesday, 12 August 2014

Water Sensor Circuit with Alarm

Description

This is a simple musical alarm circuit which produces a musical tone when water or any other conducting liquid comes in contact with the two sensor wires provided. The circuit is based on four transistors and one melody generator IC (M3482).
When water bridges the sensor wires A & B, the base of Q1 gets connected to the negative rail and Q1 conducts. This turns Q2 and Q3 ON. When Q3 is ON, power is supplied to the melody generator circuit, which starts producing 12 different melodies one after another. The music continues as long as there is water between the sensor wires. The pot R12 can be used as a volume control.
Circuit diagram with Parts list


  • Assemble the circuit on a good quality PCB or common board.
  • Two insulated aluminum wires can be used as the sensor.
  • The IC1 must be mounted on an IC holder.
  • The speaker can be an 8 Ohm, ½ W tweeter.


The 8Way Relay Board
This board is designed specifically to control the 5-motor Robot Arm sold by Baycom Technologies. It has no input facilities, but it is less expensive than combining the I/O Board with the Relay sub-Board. If you need lots of relays and no input, this is the way to go.
The 8-way Board with only 4 relays installed.



Parts List:
1 x PCB
8 x SPDT 12 volt Relays
1 x PCB mounted DB 25 Socket
1 x ULN2803 Integrated Circuit
1 x 1N4004 Diode
8 x 1N914 Diode
8 x 3mm Red LED
8 x 560 ohm 1/4 watt resistor
1 x 47uF Electrolytic Capacitor (up to 1000uF can be used depending on the power supply)
1 x 0.01uF (approx) Green Cap
1 x LM7812 Voltage regulator (TO-220 case)
1 x 1 amp Bridge Rectifier
1 x 2.1mm Power Socket
25 pin Serial Cable - Male to Male
12 - 24 volt power supply with 2.1mm Plug

Optional: 8 x 3-way PCB-mounted terminal blocks.
Kit Assembly:
Begin with the smaller items such as the 1N914 diodes and then the IC. You can then start on the 'taller' items like the LEDs, resistors, regulator, bridge, green cap, etc. Once you have the basic components soldered in, carefully install the DB25 socket. Make sure that none of the pins get bent over as you wiggle everything into place - it's a nightmare to unsolder the socket if you make a mistake! Leave the relays until last.



Speech Recorder
Speech is a complex phenomenon. People rarely understand how it is produced and perceived. The naive perception is often that speech is built from words, and each word consists of phones. The reality is unfortunately very different. Speech is a dynamic process without clearly distinguished parts. It is always useful to open a speech recording in a sound editor, look at it and listen to it. Below, for example, is a speech recording in an audio editor.



All modern descriptions of speech are to some degree probabilistic. That means there are no certain boundaries between units, or between words. Speech-to-text conversion and other applications of speech are never 100% correct. That idea is rather unusual for software developers, who usually work with deterministic systems, and it creates a lot of issues specific to speech technology.
In current practice, speech structure is understood as follows:
Speech is a continuous audio stream in which rather stable states mix with dynamically changing states. In this sequence of states, one can define more or less similar classes of sounds, or phones. Words are understood to be built of phones, but this is certainly not strictly true. The acoustic properties of a waveform corresponding to a phone can vary greatly depending on many factors - phone context, speaker, style of speech and so on. The so-called coarticulation makes phones sound very different from their “canonical” representation. Next, since transitions between phones are more informative than stable regions, developers often talk about diphones - parts of phones between two consecutive phones. Sometimes developers talk about subphonetic units - different substates of a phone. Often three or more regions of a different nature can easily be found.
The number three is easily explained. The first part of the phone depends on its preceding phone, the middle part is stable, and the next part depends on the subsequent phone. That's why there are often three states in a phone selected for speech recognition.
Sometimes phones are considered in context; such phones in context are called triphones or even quinphones. Note that, unlike a diphone, a triphone is matched against the same stretch of waveform as a plain phone; it just differs by name. That's why we prefer to call this object a senone. A senone's dependence on context can be more complex than just the left and right neighbours; it can be a rather complex function defined by a decision tree, or in some other way.
Next, phones build subword units, like syllables. Sometimes syllables are defined as “reduction-stable entities”: when speech becomes fast, phones often change, but syllables remain the same. Syllables are also related to intonational contour. There are other ways to build subwords - morphologically-based in morphology-rich languages, or phonetically-based. Subwords are often used in open vocabulary speech recognition.
Subwords form words. Words are important in speech recognition because they restrict combinations of phones significantly. If there are 40 phones and an average word has 7 phones, there are 40^7 (about 1.6 x 10^11) possible phone sequences of that length, but only a tiny fraction of them are actual words. Luckily, even a very educated person rarely uses more than 20 thousand words in practice, which makes recognition far more feasible.
Words and other non-linguistic sounds, which we call fillers (breath, um, uh, cough), form utterances. Utterances are separate chunks of audio between pauses. They don't necessarily match sentences, which are a more semantic concept.
On top of this, there are dialog acts like turns, but they go beyond the scope of this document.
The common way to recognize speech is the following: we take the waveform, split it into utterances at silences, and then try to recognize what is being said in each utterance. To do that, we take all possible combinations of words and try to match them against the audio, choosing the best matching combination. There are a few important concepts in this matching process.
First of all, there is the concept of features. Since the number of parameters in the raw waveform is very large, it is reduced: numbers are calculated from the speech by dividing it into frames. For each frame, typically 10 milliseconds long, 39 numbers that represent the speech are extracted; this set is called a feature vector. How to generate these numbers is a subject of active investigation, but in a simple case they are derived from the spectrum.
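As an illustration only - not the feature pipeline of any particular recognizer - the sketch below frames a recording and extracts one 39-dimensional feature vector per 10 ms frame (13 MFCCs plus their first and second time derivatives). The use of the librosa library, the file name speech.wav and the 16 kHz sample rate are assumptions made for this example.

import numpy as np
import librosa

# Load the recording (assumed file name and sample rate).
signal, sr = librosa.load("speech.wav", sr=16000)

# 25 ms analysis window, 10 ms hop -> one feature vector every 10 ms.
n_fft = int(0.025 * sr)        # 400 samples
hop = int(0.010 * sr)          # 160 samples

# 13 MFCCs per frame, a common spectrum-derived representation.
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13,
                            n_fft=n_fft, hop_length=hop)

# Append first and second derivatives (deltas) -> 13 x 3 = 39 numbers per frame.
delta = librosa.feature.delta(mfcc)
delta2 = librosa.feature.delta(mfcc, order=2)
features = np.vstack([mfcc, delta, delta2])   # shape: (39, number_of_frames)

print(features.shape)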
Second, there is the concept of the model. A model describes a mathematical object that gathers the common attributes of a spoken unit. In practice, the acoustic model of a senone is a Gaussian mixture over its three states - to put it simply, the most probable feature vectors for those states. The concept of the model raises the following issues: how well the model fits reality, whether the model can be made better despite its internal problems, and how adaptive the model is to changing conditions.
The model of speech is called a Hidden Markov Model, or HMM. It is a generic model that describes a black-box communication channel: the process is described as a sequence of states which replace one another with certain probabilities. This model is intended to describe any sequential process, like speech, and it has proven to be really practical for speech decoding.
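To make the state-sequence idea concrete, here is a minimal sketch of Viterbi decoding over a toy three-state left-to-right HMM (the usual begin/middle/end states of a phone). The transition values and the emission scores below are invented for illustration; a real recognizer uses trained models and log-likelihoods from the acoustic model.

import numpy as np

# Toy 3-state left-to-right HMM: state 0 = start of the phone,
# state 1 = stable middle, state 2 = end. Values are illustrative only.
log_start = np.log(np.array([1.0, 1e-10, 1e-10]))
log_trans = np.log(np.array([
    [0.6, 0.4, 1e-10],    # from state 0: stay, or move on to state 1
    [1e-10, 0.7, 0.3],    # from state 1: stay, or move on to state 2
    [1e-10, 1e-10, 1.0],  # from state 2: stay until the phone ends
]))

def viterbi(log_emit, log_trans, log_start):
    """Most probable state sequence given per-frame emission log scores.
    log_emit[t, s] = log P(feature vector of frame t | state s)."""
    T, S = log_emit.shape
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0] = log_start + log_emit[0]
    for t in range(1, T):
        for s in range(S):
            prev = score[t - 1] + log_trans[:, s]
            back[t, s] = np.argmax(prev)
            score[t, s] = prev[back[t, s]] + log_emit[t, s]
    path = [int(np.argmax(score[-1]))]          # best final state
    for t in range(T - 1, 0, -1):               # follow the back-pointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Made-up emission scores for 6 frames: early frames favour state 0,
# middle frames state 1, late frames state 2.
log_emit = np.log(np.array([
    [0.8, 0.1, 0.1], [0.7, 0.2, 0.1], [0.2, 0.7, 0.1],
    [0.1, 0.8, 0.1], [0.1, 0.3, 0.6], [0.1, 0.1, 0.8],
]))
print(viterbi(log_emit, log_trans, log_start))   # e.g. [0, 0, 1, 1, 2, 2]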
Third, there is the matching process itself. Since it would take a huge amount of time - more than the universe has existed - to compare all feature vectors with all models, the search is optimized by many tricks. At any point we maintain the best matching variants and extend them as time goes on, producing the best matching variants for the next frame.
A lattice is a directed graph that represents variants of the recognition. Often, getting the best match is not practical; in that case, lattices are a good intermediate format to represent the recognition result.
N-best lists of variants are like lattices, though their representations are not as dense as the lattice ones.
Word confusion networks (sausages) are lattices where the strict order of nodes is taken from lattice edges.
Speech database - a set of typical recordings for the target task. For a dialog system it might be dialogs recorded from users; for a dictation system it might be recordings of read text. Speech databases are used to train, tune and test the decoding systems.
Text databases - sample texts collected for language model training and so on. Usually text databases are collected in plain sample-text form. The issue with collection is converting existing documents (PDFs, web pages, scans) into spoken text form. That is, you need to remove tags and headings, expand numbers into their spoken form, and expand abbreviations.
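As a rough illustration of that normalization step - not the tooling of any particular project - the sketch below strips markup, expands digits and a few abbreviations. The abbreviation table and the num2words package are assumptions made for the example; a real pipeline needs far more complete rules.

import re
from num2words import num2words   # assumed third-party helper for number expansion

# Hypothetical abbreviation table; a real system needs a much larger one.
ABBREVIATIONS = {"dr.": "doctor", "st.": "street", "etc.": "et cetera"}

def normalize(text):
    """Convert raw document text into a rough spoken form for LM training."""
    text = re.sub(r"<[^>]+>", " ", text)          # drop HTML/XML tags
    text = text.lower()
    words = []
    for word in text.split():
        if word in ABBREVIATIONS:
            words.append(ABBREVIATIONS[word])     # expand abbreviation
        elif word.isdigit():
            words.append(num2words(int(word)))    # "42" -> "forty-two"
        else:
            words.append(word)
    return " ".join(words)

print(normalize("<p>Dr. Smith lives at 42 Oak St.</p>"))
# -> "doctor smith lives at forty-two oak street"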
When a speech recognition system is being developed, the most complex issue is to make the search accurate (considering as many variants as possible) while keeping it fast enough not to run for ages. There are also issues with making the model match the speech, since models are never perfect.
Usually the system is tested on a test database that is meant to represent the target task correctly.
The following characteristics are used:
Word error rate. Suppose we have a reference text and its recognition result, with the reference N words long. Aligning the two, I words were inserted, D words were deleted and S words were substituted. The word error rate is
WER = (I + D + S) / N
WER is usually measured in percent.
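A minimal sketch of how WER is computed in practice: a word-level edit-distance alignment, whose minimal total of insertions, deletions and substitutions is divided by the reference length. The function name and the example sentences are made up for illustration.

def wer(reference, hypothesis):
    """Word error rate: minimal (I + D + S) / N over a word-level alignment."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimal number of edits turning ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                                   # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                                   # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,            # deletion
                          d[i][j - 1] + 1,            # insertion
                          d[i - 1][j - 1] + sub)      # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("she had your dark suit", "she had her dark suit in"))
# 1 substitution + 1 insertion -> 2 / 5 = 0.4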
Accuracy. It is almost the same thing as word error rate, but it doesn't count insertions.
Accuracy = (N - D - S) / N
Accuracy is actually a worse measure for most tasks, since insertions are also important in final results. But for some tasks, accuracy is a reasonable measure of the decoder performance.
Speed. Suppose an audio file is 2 hours long and decoding took 6 hours; the speed is then counted as 3xRT (three times real time).
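A tiny sketch of that real-time-factor arithmetic, with the durations from the example above hard-coded for illustration:

audio_seconds = 2 * 3600           # 2 hours of recorded audio
decoding_seconds = 6 * 3600        # 6 hours of wall-clock decoding time
real_time_factor = decoding_seconds / audio_seconds
print(f"{real_time_factor:.1f}xRT")   # 3.0xRT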
ROC curves. For detection tasks, where there are false alarms and hits/misses, ROC curves are used. A ROC curve is a graph of the number of false alarms versus the number of hits; the aim is to find the optimal operating point, where false alarms are few and the hit rate is close to 100%.
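A minimal sketch of how the points of such a curve can be computed, by sweeping a decision threshold over detector scores. The scores and labels below are invented toy data; a real evaluation would use the scores the detector produces on a test database.

import numpy as np

# Toy data: detector score per trial (higher = more confident) and
# the ground truth (1 = the event is really there, 0 = it is not).
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1])
labels = np.array([1,   1,   0,   1,   1,    0,   0,   1,   0,   0])

for threshold in np.sort(np.unique(scores))[::-1]:
    detected = scores >= threshold
    hit_rate = np.mean(detected[labels == 1])          # fraction of events found
    false_alarm_rate = np.mean(detected[labels == 0])  # fraction of non-events flagged
    print(f"threshold {threshold:.2f}: hits {hit_rate:.2f}, "
          f"false alarms {false_alarm_rate:.2f}")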
There are other properties that aren't often taken into account but are still important for many practical applications. Your first task should be to choose such a measure and apply it systematically during system development. Your second task is to collect a test database and check how your application performs on it.
PARTS LIST

R1 = 1k
R2 = 470k
R3 = 10k
R4 = 5k1
R5 = 4k7
R6,7 = 100k
R8,9 = 1M
R10 = 10R
C1-10 = 100nF/63V
C11 = 47nF/63V
E1,4 = 220uF/16V
E2 = 4.7uF/16V
E3 = 22uF/16V
IC1 = ISD2560 + socket
IC2 =LM78L05
IC3 = LM386 + socket
MIC = Condenser microphone
S1,2 = Pushbutton (S1 = Start and Pause. S2 = Stop and Reset)
S3 = Change-over switch
Loudspeaker = 8R (8 Ohm) speaker


 


      The 2 pushbuttons = S1: Start/Pause. S2: Stop/Reset.

If you want to play your message, put S3 at Play. Then push S1 to start playing and push it again to pause.

If you want to delete your message, press S2 twice.

If you want to record a message, put S3 at Rec. Then push S1 to start and S2 to stop.



Speaker to microphone converter circuit

Description

This circuit is a simple approach to converting a loudspeaker into a microphone. When sound waves fall on the diaphragm of a speaker, the coil moves in the magnetic field and a small proportional voltage is induced in it. Usually this induced voltage is very low in magnitude and of little use on its own. In this circuit, the low voltage is amplified using transistors to produce a reasonable output. Transistor Q1 is wired in common base mode and produces the required voltage gain. Transistor Q2 is wired as an emitter follower to provide enough current gain. The voice quality of this circuit will not match that of a conventional microphone, but quite reasonable quality can be obtained. To set up the circuit, keep the preset R2 at around 10 Ohms and connect the battery, then adjust R2 to obtain the optimum sound quality.
Circuit diagram with Parts list.

  • Assemble the circuit on a general purpose PCB.
  • Power the circuit from a 9 V PP3 battery.
  • A 3 inch speaker can be used as K1.
  • All capacitors must be rated at least 15V.
  • An 8 Ohm speaker or headphone can be connected at the output to hear the picked-up sound.



Software delay routine in 8051 microcontroller

In an 8051 microcontroller, it takes 12 cycles of the processor clock to execute a single machine cycle. For an 8051 microcontroller clocked by a 12MHz crystal, the time taken to execute one machine cycle is 1µS, according to the equation: time for 1 machine cycle = 12 / 12MHz = 1µS. The shortest instructions execute in 1µS and other instructions take 2 or more microseconds, depending on the instruction. Thus a time delay of any magnitude can be generated by looping suitable instructions a required number of times. Keep in mind that a software delay is not very accurate, because we cannot exactly predict how much time it takes to execute a single instruction. Generally an instruction executes in its theoretical amount of time, but it may sometimes run slightly faster or slower for other reasons. Therefore it is better to use the 8051 Timer for generating delays in time-critical applications. However, software delay routines are very easy to develop and are good enough for less critical, simple applications.

Program to delay 1mS.

 
DELAY: MOV R6,#250D       ; load the first loop counter
       MOV R7,#250D       ; load the second loop counter
LABEL1: DJNZ R6,LABEL1    ; 250 x 2µS = 500µS
LABEL2: DJNZ R7,LABEL2    ; another 250 x 2µS = 500µS
        RET               ; total delay of about 1mS
The above program produces a delay of roughly 1mS. The instruction DJNZ Rx,LABEL is a two-cycle instruction, so it takes 2µS to execute. Repeating this instruction 500 times therefore generates a delay of 500 x 2µS = 1mS. The program is written as a subroutine and works this way: when the subroutine DELAY is called, registers R6 and R7 are loaded with 250D. Then DJNZ R6,LABEL1 is executed until R6 becomes zero, after which DJNZ R7,LABEL2 is executed until R7 becomes zero. Together they repeat DJNZ Rx,LABEL 500 times, and the result is a 1mS delay. As said earlier, this is just a rough delay, and when you test the program you may find slight differences in the output. You can adjust the initial values of R6 and R7 to make the result more accurate.

Program to delay 1 second.

The program shown below produces a delay of around 1 second. In this program the 1mS delay subroutine (DELAY) is called 4 times back to back, and the entire cycle is repeated 250 times. As a result, a delay of 4 x 1mS x 250 = 1000mS = 1 second is produced.
DELAY1: MOV R5,#250D      ; outer loop counter
LABEL: ACALL DELAY        ; 4 x 1mS = 4mS per pass
       ACALL DELAY
       ACALL DELAY
       ACALL DELAY
       DJNZ R5,LABEL      ; 250 passes x 4mS = about 1 second
       RET
DELAY: MOV R6,#250D       ; 1mS delay subroutine (as above)
       MOV R7,#250D
LOOP1: DJNZ R6,LOOP1      ; 250 x 2µS = 500µS
LOOP2: DJNZ R7,LOOP2      ; another 250 x 2µS = 500µS
       RET

Square wave generation using 8051.

Using software delay subroutines, square waves over a wide frequency range (limited by the crystal frequency) can be produced with the 8051. The idea is very simple: run a delay subroutine equal to half the period of the square wave, complement any port pin after the delay routine finishes, run the delay subroutine again, complement the same port pin again, and repeat the cycle over and over. This results in a square wave of the required frequency at the corresponding port pin. The circuit diagram for generating a square wave using the 8051 is shown below. The same circuit can be used for generating any frequency; only the program differs.
Square wave generation using 8051

Program for generating a 1kHz square wave.
ORG 000H
MOV P1,#00000000B        ; clear port P1
MOV A,#00000000B         ; clear the accumulator
MAIN: MOV R6,#220D       ; load the half-period delay counter
LOOP1:DJNZ R6,LOOP1      ; software delay of about half the period
      CPL A              ; complement the accumulator
      MOV P1,A           ; toggle the P1 pins
      SJMP MAIN          ; repeat forever
      END
Program for generating a 2kHz square wave.
ORG 000H
MOV P1,#00000000B
MOV A,#00000000B
MAIN: MOV R6,#220D
      MOV R7,#183D
LOOP1:DJNZ R6,LOOP1
LOOP2:DJNZ R7,LOOP2
      CPL A
      MOV P1,A
      SJMP MAIN
      END
Program for generating a 10kHz square wave.
ORG 000H
MOV P1,#00000000B
MOV A,#00000000B
MAIN: MOV R6,#20D
LOOP1:DJNZ R6,LOOP1
      CPL A
      MOV P1,A
      SJMP MAIN
      END



Scoring game circuit
Description
A simple scoring game circuit that can be used on any occasion when a die is needed. The circuit is based on an NE555 timer, a 74LS192 counter, a 74LS247 decoder and a 7-segment LED display. The timer IC1 produces the clock for the counter IC (IC2); its frequency is determined by R1 and C2. When S2 is pressed, IC2 counts up; when S3 is pressed, IC2 counts down. IC3 decodes the count and displays it on the seven-segment LED display. That is all there is to the working of the circuit. The circuit sticks strictly to the basics of counters and is a good one for beginners; there is nothing complicated about it.
Circuit diagram with Parts list.


  • To play the game, switch the power ON and press S1 to reset the counter.
  • Now press S2 or S3 and release. IC2 will hold the last count. Now press S4 to see the score on the display - that's your score. Now the second person can try.
  • Each time someone plays, they should press S1 to reset the count, then press S2 or S3, and then S4 to see the score.
  • The circuit can be powered from a 9V battery or a 9V regulated DC power supply.


Ringing phone light flasher
Description.
This circuit can be used to light a bulb or sound an alarm when the phone rings. Using this circuit, the nuisance of the telephone ringing at night can be avoided.
When the telephone rings, the line voltage rises to about 72 volts. The LED in the opto-coupler then lights and its transistor conducts, which in turn makes transistor Q1 conduct and switches the relay ON. The load connected to the relay, whether a bulb or a bell, turns ON.

Circuit diagram with Parts list. 



  • Assemble the circuit on a good quality PCB or common board.
  • Use a 5V DC power supply for powering the circuit.
  • The load can be connected through the NC1, NC2 & C points of the relay.


