Suppose that the function #f(x)# is defined on the union of intervals
#(a_1,a_2) \cup (a_2,a_3) \cup \cdots \cup (a_{n-1},a_n)#
where the intervals can be open or closed, #a_1# and/or #a_n# are possibly #\pm\infty#, and on each interval #(a_i, a_{i+1})# the function is defined as #f(x) = f_i(x)#.
There are two cases. If you need to compute #\lim_{x \to c} f(x)#, where #c \in (a_i, a_{i+1})# for some #i#, then you simply compute #\lim_{x \to c} f_i(x)#, as if it were an ordinary function: near #c#, the function #f# coincides with #f_i#, so only that piece matters.
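For example (an illustrative function, not taken from the question), take #f_1(x) = x^2# on #(-\infty, 0)# and #f_2(x) = x+1# on #(0, \infty)#. Since #c = -1# lies inside the first interval, #\lim_{x\to -1} f(x) = \lim_{x\to -1} x^2 = 1#.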
The only special case is when you want to compute the limit as #x# tends to a border point #a_i#. In this case, you compute the left and right limits separately, using the piece that is in force on each side: if #f(x) = f_{i-1}(x)# on #(a_{i-1},a_i)# and #f(x) = f_i(x)# on #(a_i,a_{i+1})#, then
#\lim_{x\to a_i^-} f(x) = \lim_{x\to a_i^-} f_{i-1}(x)# and #\lim_{x\to a_i^+} f(x) = \lim_{x\to a_i^+} f_i(x)#
So, the limit #\lim_{x\to a_i} f(x)# exists if and only if the left limit of #f_{i-1}(x)# and the right limit of #f_i(x)# exist and are equal, and in that case it equals their common value.
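Continuing the example above, at the border point #a = 0# we have #\lim_{x\to 0^-} f(x) = \lim_{x\to 0^-} x^2 = 0# and #\lim_{x\to 0^+} f(x) = \lim_{x\to 0^+} (x+1) = 1#. The one-sided limits differ, so #\lim_{x\to 0} f(x)# does not exist.

If you want to check such computations mechanically, here is a minimal sketch using SymPy; the pieces #x^2# and #x+1# are just the illustrative choice from above:

```python
from sympy import symbols, limit

x = symbols('x')

# Pieces of the illustrative function: f_1 on (-oo, 0), f_2 on (0, oo)
f1 = x**2
f2 = x + 1

# At the border point 0, take the left limit of the left piece
# and the right limit of the right piece.
left = limit(f1, x, 0, dir='-')   # -> 0
right = limit(f2, x, 0, dir='+')  # -> 1

print(left, right, left == right)  # 0 1 False: the limit at 0 does not exist
```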